4/10/2026
ABAP Debugging Trick: Store Internal Tables as JSON for Error Analysis
When working with BAPIs or complex function modules in ABAP, debugging production issues can be extremely difficult, especially when the error is not reproducible.
One practical solution I’ve used is:
Capture the full request/response payload (including internal tables) as JSON and store it in a custom log table.
This allows you to analyze the exact state of data at the time of failure.
In this article, I’ll walk through:
Converting ABAP internal tables to JSON
Storing large JSON payloads in database tables
Viewing them properly (since SE16N won’t help much 😅)
Step 1: Convert Internal Tables to JSON
DATA: lt_items TYPE TABLE OF zmy_item,
      lv_json  TYPE string.

/ui2/cl_json=>serialize(
  EXPORTING
    data   = lt_items
  RECEIVING
    r_json = lv_json ).
If you need to keep multiple internal tables in one payload, wrap them in a structure:
TYPES: BEGIN OF ty_bapi_log,
         bapi_name   TYPE char50,
         input_data  TYPE STANDARD TABLE OF zinput WITH EMPTY KEY,
         output_data TYPE STANDARD TABLE OF zoutput WITH EMPTY KEY,
         return_tab  TYPE STANDARD TABLE OF bapiret2 WITH EMPTY KEY,
       END OF ty_bapi_log.

DATA: ls_bapi_log TYPE ty_bapi_log,
      lv_json     TYPE string.
" Example source tables
DATA: lt_input TYPE STANDARD TABLE OF zinput,
lt_output TYPE STANDARD TABLE OF zoutput,
lt_return TYPE STANDARD TABLE OF bapiret2.
" Fill your source tables here
" lt_input = ...
" lt_output = ...
" lt_return = ...
" Assign values into the wrapper structure
ls_bapi_log-bapi_name = 'BAPI_PO_CREATE1'.
ls_bapi_log-input_data = lt_input.
ls_bapi_log-output_data = lt_output.
ls_bapi_log-return_tab = lt_return.
" Convert the whole structure into JSON
/ui2/cl_json=>serialize(
  EXPORTING
    data     = ls_bapi_log
    compress = abap_true
  RECEIVING
    r_json   = lv_json ).

" Now lv_json contains the full JSON payload
WRITE lv_json.
The resulting JSON looks like this:
{
"bapi_name": "BAPI_PO_CREATE1",
"input_data": [
{
...
}
],
"output_data": [
{
...
}
],
"return_tab": [
{
"type": "E",
"id": "ME",
"number": "083",
"message": "Vendor not found"
}
]
}
To keep multiple internal tables in one JSON payload, create a wrapper structure, assign each internal table to its corresponding component, and serialize the wrapper once. This produces a single structured JSON snapshot that is easy to store and debug later.
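The same snapshot can later be turned back into ABAP data for analysis. A minimal sketch, assuming the ty_bapi_log wrapper type and the lv_json variable from above:

```abap
" Restore the logged snapshot into the original wrapper type
DATA: ls_restored TYPE ty_bapi_log.

/ui2/cl_json=>deserialize(
  EXPORTING
    json = lv_json
  CHANGING
    data = ls_restored ).

" ls_restored-input_data now holds the exact rows captured at failure time,
" e.g. to feed them back into the BAPI in a test system.
```

This round trip is what makes the technique useful beyond simple viewing: you can replay the exact failing input in DEV or QAS.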
Step 2: Store JSON in a Custom Table
Recommended Table Design
Header fields
LOG_ID, CREATED_AT, CREATED_BY, OBJECT, ERROR_TEXT
Payload field
Use:
LCHR (for large JSON)
Important: LCHR requires a length field.
Example:
JSON_LEN (INT4), JSON_DATA (LCHR)
LCHR (and LRAW) fields must be placed at the end of the table because they are variable-length fields, and ABAP needs all fixed-length fields defined first to manage storage and memory layout correctly.
Let’s break it down simply
1. Two types of fields in ABAP tables
# Fixed-length fields
CHAR
INT
DATE
etc.
These have known size at design time
# Variable-length fields
LCHR (large text), LRAW (large binary)
These are dynamic in size
2. How ABAP stores table rows internally
Think of a DB row like this:
| fixed fields | variable field |
Example:
| LOG_ID | CREATED_AT | USER | JSON_LEN | JSON_DATA |
ABAP needs to know:
where fixed fields end
where variable data starts
What happens if LCHR is in the middle?
| LOG_ID | JSON_DATA | CREATED_AT |
Now there is a problem:
JSON_DATA's size is not fixed, so ABAP doesn't know where CREATED_AT starts.
This breaks the row structure.
That’s why SAP enforces this rule
All variable-length fields must come at the end so the system can safely calculate offsets for fixed fields.
3. Why LCHR also needs a length field
You defined:
JSON_LEN (INT4), JSON_DATA (LCHR)
Why?
Because ABAP needs to know:
“How many bytes of JSON_DATA are actually used?”
So:
JSON_LEN -> tells the actual size
JSON_DATA -> holds the content
Important DDIC Rule
When you define a table:
JSON_LEN  TYPE INT4
JSON_DATA TYPE LCHR
JSON_LEN must come immediately before JSON_DATA, and both must be at the end of the table.
LCHR fields must be placed at the end of an ABAP table because they are variable-length fields. Unlike fixed-length fields, their size is not known at design time, so ABAP relies on a length field (e.g., JSON_LEN) to determine how much data is stored. By keeping LCHR fields at the end, the system can safely calculate offsets for all preceding fields and maintain a consistent row structure.
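As an illustration, the log table could be defined roughly like this in table DDL source. This is a hedged sketch: the table and field names (ZERROR_LOG, JSON_LEN, JSON_DATA) are the hypothetical ones used throughout this article, and the LCHR length is chosen arbitrarily.

```abap
@EndUserText.label : 'JSON error log'
@AbapCatalog.deliveryClass : #A
@AbapCatalog.tableCategory : #TRANSPARENT
define table zerror_log {
  key log_id     : sysuuid_c32 not null;
  key created_at : timestampl not null;
  created_by     : syuname;
  object         : abap.char(30);
  error_text     : abap.char(255);
  json_len       : abap.int4;          // length field, directly before the LCHR
  json_data      : abap.lchr(30000);   // variable-length payload, last field
}
```

Note how JSON_LEN sits directly in front of JSON_DATA and both close out the field list, exactly as the DDIC rule requires.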
Insert Example
DATA: ls_log TYPE zerror_log.
ls_log-log_id = cl_system_uuid=>create_uuid_c32_static( ).
ls_log-created_by = sy-uname.
ls_log-error_text = lv_error_text.
ls_log-json_len = strlen( lv_json ).
ls_log-json_data = lv_json.
INSERT zerror_log FROM ls_log.
Step 3: How to View JSON Properly
If you tried viewing JSON in SE16N, you probably noticed:
truncated data
unreadable long strings
poor formatting
That’s expected.
SE16N is not designed for large text fields like LCHR.
Option 1: Eclipse (ADT) Data Preview
* Better handling of long text
* Easy for developers
But not ideal for business users.
Option 2: Build a Simple Viewer (Recommended)
Create a small report:
ALV list of logs
Double-click -> open full JSON
Display using
CL_GUI_TEXTEDIT
This is the best long-term solution.
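A stripped-down sketch of such a viewer report, assuming the ZERROR_LOG table from Step 2 (the report name and selection logic are illustrative; a real version would use an ALV list with double-click navigation instead of a single parameter):

```abap
REPORT zjson_log_viewer.

PARAMETERS: p_logid TYPE sysuuid_c32.

DATA: ls_log    TYPE zerror_log,
      lo_editor TYPE REF TO cl_gui_textedit.

START-OF-SELECTION.
  SELECT SINGLE * FROM zerror_log INTO ls_log WHERE log_id = p_logid.
  IF sy-subrc <> 0.
    MESSAGE 'Log entry not found' TYPE 'S' DISPLAY LIKE 'E'.
    RETURN.
  ENDIF.

  " Show the JSON read-only in a text editor control on the default screen
  CREATE OBJECT lo_editor
    EXPORTING
      parent = cl_gui_container=>default_screen.
  lo_editor->set_textstream( ls_log-json_data(ls_log-json_len) ).
  lo_editor->set_readonly_mode( 1 ).
  WRITE space.  " minimal list output so the default screen is displayed
```

CL_GUI_TEXTEDIT handles long text without truncation, which is exactly what SE16N cannot do for LCHR fields.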
Important Considerations
Avoid sensitive data
Do NOT log:
passwords
tokens
personal data
Control size
If tables are huge:
log only first N rows
log key fields
or summarize
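Logging only the first N rows is a one-liner per table. A sketch, assuming the lt_input table from Step 1 and an arbitrary cap of 100 rows:

```abap
CONSTANTS lc_max_rows TYPE i VALUE 100.

" Work on a copy so the productive data is untouched
DATA(lt_input_sample) = lt_input.
IF lines( lt_input_sample ) > lc_max_rows.
  DELETE lt_input_sample FROM lc_max_rows + 1.  " drop everything after row 100
ENDIF.

ls_bapi_log-input_data = lt_input_sample.
```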
For very large payloads
Consider:
splitting into header + payload tables
compressing JSON
chunk storage
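Compression can be done with the standard class CL_ABAP_GZIP. Note that the result is binary, so the payload field would need to be LRAW (with its own length field) rather than LCHR; the sketch below only shows the compress/decompress round trip:

```abap
DATA: lv_gzipped  TYPE xstring,
      lv_restored TYPE string.

" Compress the JSON string into a gzip byte stream
cl_abap_gzip=>compress_text(
  EXPORTING
    text_in  = lv_json
  IMPORTING
    gzip_out = lv_gzipped ).

" ... store lv_gzipped in the log table; later, read it back:
cl_abap_gzip=>decompress_text(
  EXPORTING
    gzip_in  = lv_gzipped
  IMPORTING
    text_out = lv_restored ).
```

For verbose JSON (long field names, many repeated keys) gzip typically shrinks the payload dramatically, which also keeps the log table lean.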
Real Benefit
This approach helps you:
debug production issues faster
capture exact runtime data
avoid “it works in DEV but not in PROD” situations
Logging JSON snapshots of internal tables is one of the most effective debugging techniques in ABAP for complex integrations.
It’s simple to implement, powerful in practice, and saves hours of guesswork.
Finally, if your process involves multiple BAPI calls within a single execution, it’s a good practice to generate a unique run ID (GUID) and use it as a LOG_ID. By combining this with a CREATED_AT timestamp as part of the primary key, you can group and track all related logs for that specific run. This makes it much easier to analyze what happened during a single execution at any point in time, especially when debugging complex or intermittent issues in production.
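The run-ID pattern looks roughly like this, assuming the ZERROR_LOG table from Step 2 with CREATED_AT as a timestamp in the primary key (lv_json is the serialized payload of each call):

```abap
DATA: ls_log TYPE zerror_log.

" One GUID per execution, shared by every log entry of the run
DATA(lv_run_id) = cl_system_uuid=>create_uuid_c32_static( ).

" First BAPI call of the run
ls_log-log_id     = lv_run_id.
ls_log-created_by = sy-uname.
GET TIME STAMP FIELD ls_log-created_at.
ls_log-json_len   = strlen( lv_json ).
ls_log-json_data  = lv_json.
INSERT zerror_log FROM ls_log.

" Second BAPI call of the same run: same LOG_ID, new CREATED_AT
GET TIME STAMP FIELD ls_log-created_at.
INSERT zerror_log FROM ls_log.

" Later, all snapshots of one run can be read together:
" SELECT * FROM zerror_log WHERE log_id = @lv_run_id ORDER BY created_at.
```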
If the logging feature is introduced temporarily to diagnose a production issue, it can be controlled using ABAP system variables without requiring an additional transport. By checking variables such as sy-sysid or sy-mandt, the logging logic can be conditionally executed based on the system or client. For example, logging can be enabled only in specific environments like DEV or QAS and skipped in PRD once the issue is resolved. This approach allows the same codebase to behave differently across systems, providing a simple and effective way to disable heavy JSON logging after debugging, without making further code changes or transports.

A better approach is to use a custom control variable in SAP rather than depending only on built-in system fields. For this log feature, you can maintain a configurable flag in a place such as TVARVC or a small custom configuration table, for example ZLOG_ENABLE = X. Then, inside the logging logic, the program reads that variable before writing the JSON payload. If the flag is active, logging runs; if it is blank or turned off, the logger simply skips execution. This is much more flexible because the feature can be enabled or disabled directly in the SAP system without changing code or moving another transport, making it ideal for temporary debugging of BAPI calls in production.
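A minimal sketch of such a TVARVC-based switch (the variable name ZLOG_ENABLE is the hypothetical one from the paragraph above):

```abap
DATA: lv_flag TYPE tvarvc-low.

" Read the single-value parameter ZLOG_ENABLE (TYPE = 'P' marks a parameter)
SELECT SINGLE low FROM tvarvc
  INTO lv_flag
  WHERE name = 'ZLOG_ENABLE'
    AND type = 'P'.

IF lv_flag = 'X'.
  " Logging active: serialize the payload and insert into ZERROR_LOG as shown above
ENDIF.
```

Since TVARVC is client-dependent and maintainable via STVARV, the flag can be flipped per system and client at any time without a transport.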
Have you used JSON logging in ABAP for debugging?
Or do you use another approach for capturing runtime issues?