Advanced use of JSON Schema in Data-Centric Web APIs with SlashDB
JSON Schema has become the backbone of modern API design—not just for describing data types, but for enforcing consistency, validating input, and communicating intent between systems. Beyond simple type declarations, schemas can capture constraints like string lengths, date formats, nullability, and even complex conditional logic.
In this article, we’ll explore how these advanced features come to life in a real-world setting: SlashDB, a REST API gateway for relational databases. By examining its schema endpoints and validation mechanisms, you’ll see how JSON Schema can:
- Ensure data integrity across CRUD operations
- Provide machine-readable contracts for client applications
- Catch errors early with precise validation rules
Along the way, we’ll highlight practical examples of schema features such as required, oneOf, maxLength, maxItems, format, and pattern, showing how they transform raw database structures into robust, self-validating APIs.
We will look at how an application can leverage these features to check data for consistency, what a schema incorporating them looks like, and how an application validating user input can handle data inconsistency errors.
Case Study: SlashDB
SlashDB is an application that creates a REST API gateway for relational databases for reading and writing. Endpoints are created for all resources in a database, and any CRUD operation may be performed by sending the appropriate HTTP request to an endpoint.
In addition to creating endpoints for data, SlashDB has JSON schema endpoints so that you may examine the structure of any resource.
It has two modes of operation - Data Discovery and SQL Pass-Thru.
Data Discovery exposes unique REST endpoints for every resource within a database, from full tables, to subsets of tables, to individual columns or values.
SQL Pass-Thru creates a unique endpoint for a SQL query that a user defines and configures. The query may contain any valid SQL syntax, including complex joins or calls to stored procedures. Queries are configured to execute on a single database that is connected to SlashDB.
Example Data Discovery Endpoints
- a database endpoint - https://demo.slashdb.com/db/Chinook
- a table endpoint - https://demo.slashdb.com/db/Chinook/Employee
- a single record, identified by primary key - https://demo.slashdb.com/db/Chinook/Employee/EmployeeId/1
Data Discovery also uses JSON schemas to validate data when records are being created or changed.
Example SQL Pass-Thru Endpoints
- a simple query - https://demo.slashdb.com/query/sales-by-year.json?limit=29
- a parameterized query, with a parameter-value pair of year/2010 - https://demo.slashdb.com/query/invoices-by-year/year/2010?limit=29
Examining Data & Schemas
Data Discovery
Let's examine a single record for an Employee in the Chinook database. We'll use SlashDB's filtering features to select the record where the primary key of the table (EmployeeId) is 1.
https://demo.slashdb.com/db/Chinook/Employee/EmployeeId/1.json
{
  "__href": "/db/Chinook/Employee/EmployeeId/1.json",
  "Employee_1": {
    "__href": "/db/Chinook/Employee/EmployeeId/1/Employee_1.json"
  },
  "EmployeeId": 1,
  "LastName": "Adams",
  "FirstName": "Andrew",
  "Title": "General Manager",
  "ReportsTo": null,
  "BirthDate": "1962-02-18T00:00:00",
  "HireDate": "2002-08-14T00:00:00",
  "Address": "11120 Jasper Ave NW",
  "City": "Edmonton",
  "State": "AB",
  "Country": "Canada",
  "PostalCode": "T5K 2N1",
  "Phone": "+1 (780) 428-9482",
  "Fax": "+1 (780) 428-3457",
  "Email": "andrew@chinookcorp.com",
  "Employee_N": {
    "__href": "/db/Chinook/Employee/EmployeeId/1/Employee_N.json"
  },
  "Customer": {
    "__href": "/db/Chinook/Employee/EmployeeId/1/Customer.json"
  }
}

We see that this record has data fields of various kinds - numbers, strings, dates, null values - as well as some additional metadata.
Now let's retrieve the schema for this record: https://demo.slashdb.com/db/Chinook/Employee/EmployeeId/1.json?schema
{
  "title": "Employee",
  "type": "object",
  "required": ["EmployeeId", "LastName", "FirstName"],
  "properties": {
    "EmployeeId": {"type": "integer"},
    "LastName": {"type": "string", "maxLength": 20},
    "FirstName": {"type": "string", "maxLength": 20},
    "Title": {"type": ["string", "null"], "maxLength": 30},
    "ReportsTo": {"type": ["integer", "null"]},
    "BirthDate": {"type": ["string", "null"], "format": "date-time"},
    "HireDate": {"type": ["string", "null"], "format": "date-time"},
    "Address": {"type": ["string", "null"], "maxLength": 70},
    "City": {"type": ["string", "null"], "maxLength": 40},
    "State": {"type": ["string", "null"], "maxLength": 40},
    "Country": {"type": ["string", "null"], "maxLength": 40},
    "PostalCode": {"type": ["string", "null"], "maxLength": 10},
    "Phone": {"type": ["string", "null"], "maxLength": 24},
    "Fax": {"type": ["string", "null"], "maxLength": 24},
    "Email": {"type": ["string", "null"], "maxLength": 60},
    "Employee_1": {
      "type": ["object", "null"],
      "properties": {"__href": {"type": ["string", "null"]}}
    },
    "Employee_N": {
      "type": ["object", "null"],
      "properties": {"__href": {"type": ["string", "null"]}}
    },
    "Customer": {
      "type": ["object", "null"],
      "properties": {"__href": {"type": ["string", "null"]}}
    },
    "__href": {"type": ["string", "null"]}
  }
}
Examining this schema and comparing it to the data, we can see that various JSON Schema properties are present (a client-side validation sketch follows this list):
- there are required fields, specifying which fields in the record cannot be null - this attribute also contains the table's primary key, in this case EmployeeId
- some string fields have the maxLength property, reflecting that a maximum length is defined for the database column
- some string fields have the format property, which identifies them as date-time fields - validation/parsing tools can use this to ensure data validity
- a type with an array of values means the field can be any one of those types - e.g. ["string", "null"] is a string field that is nullable
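Because these schemas are standard JSON Schema documents, any off-the-shelf validator can consume them. Below is a minimal client-side sketch, not part of SlashDB itself, that fetches the Employee schema shown above and validates a made-up candidate record. It assumes Python with the requests and jsonschema packages, and Draft 7 semantics for the schema.

# Minimal sketch: validate a candidate record against the SlashDB-generated schema.
# Assumes the requests and jsonschema packages are installed.
import requests
from jsonschema import Draft7Validator, FormatChecker

SCHEMA_URL = "https://demo.slashdb.com/db/Chinook/Employee/EmployeeId/1.json?schema"
schema = requests.get(SCHEMA_URL).json()

candidate = {
    "EmployeeId": 999,
    "LastName": "Wood",
    "FirstName": "James",
    "Title": None,                       # allowed: type is ["string", "null"]
    "BirthDate": "2000-01-01T00:00:00",  # format: date-time
}

# FormatChecker enables format checks such as date-time; how strictly they are
# enforced depends on the validator and its optional dependencies.
validator = Draft7Validator(schema, format_checker=FormatChecker())
for error in validator.iter_errors(candidate):
    print(f"{'/'.join(map(str, error.path)) or '<root>'}: {error.message}")

If the loop prints nothing, the candidate satisfies the required, maxLength, type, and format rules described above.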
Endpoints With Multiple Records
To pull multiple records:
https://demo.slashdb.com/db/Chinook/Employee.json?limit=3
And the schema:
https://demo.slashdb.com/db/Chinook/Employee.json?schema
{
  "title": "Employee",
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "EmployeeId": {"type": "integer"},
      "LastName": {"type": "string", "maxLength": 20},
      "FirstName": {"type": "string", "maxLength": 20},
      "Title": {"type": ["string", "null"], "maxLength": 30},
      "ReportsTo": {"type": ["integer", "null"]},
      "BirthDate": {"type": ["string", "null"], "format": "date-time"},
      "HireDate": {"type": ["string", "null"], "format": "date-time"},
      "Address": {"type": ["string", "null"], "maxLength": 70},
      "City": {"type": ["string", "null"], "maxLength": 40},
      "State": {"type": ["string", "null"], "maxLength": 40},
      "Country": {"type": ["string", "null"], "maxLength": 40},
      "PostalCode": {"type": ["string", "null"], "maxLength": 10},
      "Phone": {"type": ["string", "null"], "maxLength": 24},
      "Fax": {"type": ["string", "null"], "maxLength": 24},
      "Email": {"type": ["string", "null"], "maxLength": 60},
      "Employee_1": {
        "type": ["object", "null"],
        "properties": {"__href": {"type": ["string", "null"]}}
      },
      "Employee_N": {
        "type": ["object", "null"],
        "properties": {"__href": {"type": ["string", "null"]}}
      },
      "Customer": {
        "type": ["object", "null"],
        "properties": {"__href": {"type": ["string", "null"]}}
      },
      "__href": {"type": ["string", "null"]}
    },
    "required": ["EmployeeId", "LastName", "FirstName"]
  },
  "minItems": 1
}

In this last endpoint, the schema is the same but is nested inside an array of items, since the endpoint returns multiple records. Additionally, the JSON Schema minItems property is present to denote that data being validated against this schema should contain at least one record.
One other parameter we can use with SlashDB's schemas is the cardinality query string parameter:
https://demo.slashdb.com/db/Chinook/Employee.json?schema&cardinality=3
This will add a maxItems property to the schema, specifying that the data being validated against this schema should contain at most 3 records.
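As a quick sketch of how minItems and maxItems come into play on the client side (hypothetical code, assuming the demo endpoints are readable and the requests and jsonschema packages are available):

# Sketch: enforcing expected record counts with minItems / maxItems.
import requests
from jsonschema import Draft7Validator

# cardinality=3 adds "maxItems": 3 alongside the default "minItems": 1
schema = requests.get(
    "https://demo.slashdb.com/db/Chinook/Employee.json?schema&cardinality=3"
).json()
rows = requests.get("https://demo.slashdb.com/db/Chinook/Employee.json?limit=5").json()

for error in Draft7Validator(schema).iter_errors(rows):
    # With 5 records against maxItems: 3, expect an error such as "... is too long"
    print(error.message)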
Validating Input
Let's see how SlashDB uses JSON schemas to validate user input.
Now that we've examined the Employee data and its schema, we know what the structure of a record in the table is, and we can create a new one.
To create a new record in SlashDB, we send an HTTP POST request with a JSON payload to the endpoint of the table we want to add a record to:
https://demo.slashdb.com/db/Chinook/Employee.json
First, let's send a request to add a record with a field that isn't part of the table schema:
curl -X POST "https://demo.slashdb.com/db/Chinook/Employee.json" -d '{"EmployeeId": 999, "FirstName": "James", "LastName": "Wood", "UnitPrice": 1}'

{"http_code": 400, "description": "Could not parse JSON: Additional properties are not allowed ('UnitPrice' was unexpected)"}

Whoops! UnitPrice isn't a column in the table, and SlashDB's JSON schema validator has caught that.
Now let's try to send a payload with a PostalCode that has more than 10 characters:
curl -X POST "https://demo.slashdb.com/db/Chinook/Employee.json" -d '{"EmployeeId": 999, "FirstName": "James", "LastName": "Wood", "PostalCode": "aaaaaaaaaaaaaa"}'

{"http_code": 400, "description": "Could not parse JSON: The \"PostalCode\": error at node '/0/PostalCode': 'aaaaaaaaaaaaaa' is too long"}

The validator used the maxLength property on the PostalCode value to check the length of the string, and found a problem.
Lastly, let's try to send a payload with a BirthDate that is not in the proper date-time format:
curl -X POST "https://demo.slashdb.com/db/Chinook/Employee.json" -d '{"EmployeeId": 999, "FirstName": "James", "LastName": "Wood", "BirthDate": "01-01-2000"}'

{"http_code": 400, "description": "Could not parse JSON: The \"BirthDate\": error at node '/0/BirthDate': '01-01-2000' is not a 'date-time'"}

The validator used the format property of BirthDate to check that the value was formatted as an ISO 8601 date-time string, and detected an error.
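The same checks can also be made before the request ever leaves the client. The rough sketch below is illustrative only; it assumes the requests and jsonschema packages, and that write access to the demo database would normally require credentials. It validates the payload against the table schema locally, so problems like the ones above are caught without a round trip:

# Sketch: pre-flight validation of a POST payload against the table schema.
import requests
from jsonschema import Draft7Validator, FormatChecker

schema = requests.get("https://demo.slashdb.com/db/Chinook/Employee.json?schema").json()

# The table schema describes an array of objects, so the payload is wrapped in a list.
payload = [{
    "EmployeeId": 999,
    "FirstName": "James",
    "LastName": "Wood",
    "BirthDate": "01-01-2000",  # deliberately not ISO 8601
}]

# Note: date-time is only enforced when jsonschema's optional format dependencies are installed.
validator = Draft7Validator(schema, format_checker=FormatChecker())
problems = list(validator.iter_errors(payload))
for problem in problems:
    print(f"/{'/'.join(map(str, problem.path))}: {problem.message}")

if not problems:
    # Credentials (e.g. an API key) would normally be supplied here.
    response = requests.post("https://demo.slashdb.com/db/Chinook/Employee.json", json=payload)
    print(response.status_code, response.text)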
SQL Pass-Thru
Schemas are available for queries executed through SQL Pass-Thru. They are useful for examining the structure of records and validating data integrity on the client side; they are not used internally by SlashDB to validate data for queries that modify, add, or delete records.
Let's examine the sales-by-year query and its schema:
https://demo.slashdb.com/query/sales-by-year.json?limit=10
Query record set:
[
  {"Year": 2009, "Total": 448.4801},
  {"Year": 2010, "Total": 481.45},
  {"Year": 2011, "Total": 469.58},
  {"Year": 2012, "Total": 477.53},
  {"Year": 2013, "Total": 450.58}
]

And the schema:
{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "Year": {"type": ["integer", "null"]},
      "Total": {"type": ["number", "null"]}
    }
  },
  "minItems": 1
}

In the schema, all records are grouped inside an array of items, with an object property for each column in the results. Note that in SQL Pass-Thru, the type will always be an array of values - the first value being the actual type, the second being null; this allows the schema to account for NULL values in the result.
You can now use this schema to examine the structure of the query's output, and also to validate data with any client-side tools that support JSON schemas.
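For example, a client consuming sales-by-year could validate each response before processing it. Here is a rough sketch with the jsonschema package, assuming the schema is retrieved with the same ?schema parameter used earlier:

# Sketch: validating SQL Pass-Thru results on the client side.
import requests
from jsonschema import validate, ValidationError

base = "https://demo.slashdb.com/query/sales-by-year.json"
schema = requests.get(base + "?schema").json()
rows = requests.get(base + "?limit=10").json()

try:
    validate(instance=rows, schema=schema)
except ValidationError as exc:
    raise SystemExit(f"Unexpected result shape: {exc.message}")

# Safe to consume: each row has an integer (or null) Year and a numeric (or null) Total.
total = sum(row.get("Total") or 0 for row in rows)
print(f"Total across {len(rows)} years: {total:.2f}")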
SlashDB Internal API & Schemas
SlashDB provides endpoints to its own internal API for managing connections to databases, users, and query configurations. All endpoint transactions are conducted using JSON.
JSON schemas are also available for determining what fields and data types are needed to interact with the API programmatically.
A couple more JSON schema properties are used in internal schemas: pattern and oneOf.
Example Endpoints
- a query definition - https://demo.slashdb.com/querydef/active-users
API & Schemas
Let's look at the endpoint for the active-users query definition. First, the query definition itself:
https://demo.slashdb.com/querydef/active-users.json
{
  "query_id": "active-users",
  "database": "reshareu",
  "desc": "List of active ReshareU users",
  "sqlstr": "select a.id, c.*\nfrom account a\njoin oauth o on a.id = o.account_id\njoin contact_detail c on o.user_id = c.user_id\nwhere active=True\norder by first_name, last_name",
  "http_methods": {
    "GET": true,
    "POST": true,
    "PUT": true
  },
  "creator": "admin",
  "read": [],
  "write": ["public"],
  "execute": [],
  "url_template": "/query/active-users.json"
}

Now, the schema:
https://demo.slashdb.com/querydef/active-users.json?schema
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "creator": {
      "type": ["string", "null"],
      "pattern": "^(?![xX][mM][lL])[a-zA-Z_][a-zA-Z0-9@_~-]*$",
      "maxLength": 128
    },
    "database": {
      "type": "string",
      "pattern": "^(?![xX][mM][lL])[a-zA-Z_][a-zA-Z0-9@_~-]*$",
      "maxLength": 128
    },
    "desc": {
      "type": ["string", "null"],
      "maxLength": 4096
    },
    "execute": {
      "oneOf": [
        {
          "type": "array",
          "items": {
            "type": "string",
            "pattern": "^(?![xX][mM][lL])[a-zA-Z_][a-zA-Z0-9@_~-]*$",
            "maxLength": 128
          }
        },
        {"type": "null"}
      ]
    },
    "http_methods": {
      "type": "object",
      "additionalProperties": false,
      "minProperties": 1,
      "properties": {
        "DELETE": {"type": "boolean"},
        "GET": {"type": "boolean"},
        "POST": {"type": "boolean"},
        "PUT": {"type": "boolean"}
      }
    },
    "query_id": {
      "type": "string",
      "pattern": "^(?![xX][mM][lL])[a-zA-Z_][a-zA-Z0-9@_~-]*$",
      "maxLength": 128
    },
    "read": {
      "oneOf": [
        {
          "type": "array",
          "items": {
            "type": "string",
            "pattern": "^(?![xX][mM][lL])[a-zA-Z_][a-zA-Z0-9@_~-]*$",
            "maxLength": 128
          }
        },
        {"type": "null"}
      ]
    },
    "sqlstr": {
      "type": "string",
      "minLength": 1
    },
    "write": {
      "oneOf": [
        {
          "type": "array",
          "items": {
            "type": "string",
            "pattern": "^(?![xX][mM][lL])[a-zA-Z_][a-zA-Z0-9@_~-]*$",
            "maxLength": 128
          }
        },
        {"type": "null"}
      ]
    },
    "url_template": {
      "type": ["string", "null"]
    }
  },
  "required": ["query_id", "database", "http_methods", "sqlstr"],
  "optional": ["creator", "desc", "execute", "read", "write", "url_template"]
}
We see that the pattern attribute is present in several portions of this schema. Used in conjunction with string types, pattern lets you specify a regular expression that a value must match.
For example, the query_id attribute - a unique string that identifies the query - must be no more than 128 characters long (the maxLength property) and must match the regular expression ^(?![xX][mM][lL])[a-zA-Z_][a-zA-Z0-9@_~-]*$ (it may not begin with "xml" in any letter case, must start with a letter or underscore, and may contain only letters, digits, and a few punctuation characters).
We also see the oneOf attribute - used here to specify that, when present, an attribute must be either null or an array of strings (e.g. the read or write properties).
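To see pattern and oneOf doing real work, here is a rough sketch (hypothetical client code; reaching the querydef endpoint on a real instance may require authentication) that checks a new query definition against the schema above before submitting it:

# Sketch: validating a query definition against the querydef schema.
import requests
from jsonschema import Draft7Validator

schema = requests.get("https://demo.slashdb.com/querydef/active-users.json?schema").json()

new_querydef = {
    "query_id": "xml-report",     # violates pattern: identifiers may not begin with "xml"
    "database": "Chinook",
    "http_methods": {"GET": True},
    "sqlstr": "select * from Invoice",
    "read": None,                 # allowed: oneOf permits null instead of an array
    "write": ["public"],          # allowed: oneOf also permits an array of strings
}

for error in Draft7Validator(schema).iter_errors(new_querydef):
    print(f"{'/'.join(map(str, error.path)) or '<root>'}: {error.message}")
# Expect a pattern error for query_id; renaming it to e.g. "invoice-report" makes it valid.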
Conclusion: JSON Schema as a Bridge Between Data and APIs
What we’ve seen through SlashDB is more than just schema decoration—it’s JSON Schema acting as a bridge between relational data models and modern web APIs. By embedding constraints like maxLength, format, and pattern directly into schemas, SlashDB ensures that data integrity is preserved not only in the database but also at the API boundary.
For developers, this means:
- Confidence in data quality before it ever reaches the database
- Clear contracts for client applications consuming APIs
- Reduced friction when integrating across systems
As JSON Schema continues to evolve, its role in API ecosystems will only grow. Whether you’re designing a new service or modernizing legacy systems, adopting schema-driven validation is one of the most effective ways to build APIs that are both resilient and self-documenting.
If you’re already using JSON Schema, consider how tools like SlashDB can extend its power into your data layer. And if you’re just getting started, let this case study be a reminder: schemas aren’t just documentation—they’re executable rules that make your APIs smarter, safer, and more reliable.
📌 Key Takeaways: JSON Schema in Data-Centric APIs
- Type & Nullability → Use type with arrays (e.g. ["string", "null"]) to explicitly allow nullable fields.
- Required vs. Optional → required enforces critical fields (like primary keys), while optional fields can be left null or omitted.
- String Constraints → maxLength ensures values (e.g. PostalCode) don't exceed database column limits.
- Date & Time Validation → format: "date-time" enforces ISO 8601 compliance for temporal fields.
- Array Controls → minItems and maxItems define expected record counts in query results.
- Pattern Matching → pattern applies regex rules to enforce naming conventions or input formats.
- Conditional Structures → oneOf allows flexible definitions (e.g. a property may be null or an array of strings).
- Error Transparency → Schema-driven validation provides clear, actionable error messages when input fails.
- API Contracts → Schemas double as machine-readable documentation, ensuring consistency across clients and services.