DuckDB’s auto-configuration for JSON reading is not working the way I need it to for some datasets.
Solution
Roll up your sleeves and get a bit more verbose with the DuckDB JSON reader's parameters.
Discussion
DuckDB gives us as much or as little control over how the JSON reader works as we need. Normally, the defaults work well, but we’ve all had one of those JSONs.
The DuckDB online documentation has a wealth of information on all the settings.
One common source of pain is that DuckDB cannot figure out the correct column type because it samples too few rows. Set sample_size to -1 in a call to read_json() and DuckDB will scan the entire file/input before inferring column types. You can also set it to an arbitrarily large number.
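A minimal sketch of that (the file name is just a placeholder):

```sql
-- Placeholder file name; point this at your own JSON file or URL.
-- sample_size = -1 makes DuckDB read every row before inferring
-- column types (slower, but far less likely to guess wrong).
SELECT *
FROM read_json('tags.json', sample_size = -1);
```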
Either of those changes may lead to sub-optimal load performance, since DuckDB has to scan much more of the input before it can start building the table. It may be better to tell DuckDB the column types up-front.
In the “tags” dataset, DuckDB pokes at all the fields and thinks id is a UUID. I mean, it is, but depending on how that UUID type gets stuck into Parquet, it can be a bear to use in other environments:
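Here is a sketch of an explicit-schema call using the columns parameter. The id and related_tags fields come from the discussion above; any other field names, and the file names, are illustrative stand-ins for the real dataset:

```sql
-- Sketch only: field names beyond id/related_tags are assumptions.
COPY (
  SELECT *
  FROM read_json(
    'tags.json',
    columns = {
      id: 'VARCHAR',            -- force plain VARCHAR instead of the inferred UUID
      name: 'VARCHAR',
      related_tags: 'STRUCT(id VARCHAR, name VARCHAR)[]'  -- id forced to VARCHAR here, too
    }
  )
) TO 'tags.parquet' (FORMAT PARQUET);
```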
Note how we also remembered to change the schema type of the id field in the related_tags struct.
As noted in the section on reading CSVs with custom parameters, you can also use ignore_errors to get past any issues reading particular rows, but be sure to diagnose why those rows failed (it’s most likely due to character encoding issues).
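As a sketch (again with a placeholder file name):

```sql
-- Skip rows that fail to parse rather than aborting the whole load.
-- Placeholder file name; go back and inspect the skipped rows afterwards.
SELECT *
FROM read_json('tags.json', ignore_errors = true);
```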