JSON for Modern C++ version 3.10.0

lazypenguin 2021-08-19 12:20:04 +0000 UTC [ - ]

The comments here are uncharitable. For some reason, when someone posts a C++ project, all the nitpickers come out of the woodwork. Is it as fast as simdjson? No, but simdjson is read-only. Is it as memory efficient as RapidJSON's SAX writer? No, but it's a hell of a lot more ergonomic. Is it "modern" C++? Yes it is; just open the source and you can see for yourself. Nitpicking over which standard is allowed to be called modern is silly. I'd argue modern C++ is a style or way of thinking. Would I use it for all projects? Maybe not, but this is a good, high-quality library.

Dear @nlohmann and contributors. Thank you for this library, I’ve used it in the past and appreciate your work and efforts.

nikki93 2021-08-19 05:06:54 +0000 UTC [ - ]

The API side of this library is pretty good. I've generally avoided it due to the compile time, and preferred rapidjson for both better performance and better compile time when I wanted a C++ API, or cJSON when a C API is fine and I really just want an extremely low compile time. (Lately I've been playing with a live-coding C++ context for a game engine as a side project; I use cJSON there, and rapidjson on the main project. cJSON's tradeoff is that it sticks with a single global allocator that you can only customize in one place, and it also seems to perform slightly worse on parse.)

cJSON's C API, and rapidjson's somewhat less ergonomic API, have felt fine because I usually have read/write be reflective and don't write that much manual JSON code.

Specifically for the case where compile time doesn't matter and I need expressive JSON object mutation in C++ (or some tradeoff thereof), this library seems good.

Another big feature (which I feel like it could be louder about -- it really is a big one!) is that you can also generate binary data in a bunch of formats (BSON, CBOR, MessagePack and UBJSON) that have a node structure similar to JSON's, all from the same in-memory instances of this library's data type. That's something I've wanted for various reasons, e.g. smaller files with potentially better network send times for asynchronous sends / downloads (for real-time multiplayer you still basically want to encode to something that doesn't embed field string names). I may end up doing it one layer above, at the engine level, and just have a different backend other than cJSON etc. too, though...
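
For illustration, a rough sketch of what I mean (assuming the library's documented to_cbor / to_msgpack / to_ubjson helpers):

    #include <cstdint>
    #include <vector>
    #include <nlohmann/json.hpp>

    int main() {
        // One in-memory value ...
        nlohmann::json j = {{"pos", {1.0, 2.0, 3.0}}, {"hp", 100}};

        // ... serialized to several binary node formats ...
        std::vector<std::uint8_t> cbor    = nlohmann::json::to_cbor(j);
        std::vector<std::uint8_t> msgpack = nlohmann::json::to_msgpack(j);
        std::vector<std::uint8_t> ubjson  = nlohmann::json::to_ubjson(j);

        // ... and parsed back again.
        nlohmann::json round_trip = nlohmann::json::from_msgpack(msgpack);
        return round_trip == j ? 0 : 1;
    }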

qalmakka 2021-08-19 09:11:14 +0000 UTC [ - ]

While I agree with you on compile times getting slower with nlohmann/json, I think its performance is fairly adequate. I've been using it on the ESP32 in single-core mode and it's still fast enough for everything. Unless you parse tens of millions of JSON objects per second, you won't really notice any difference between the various JSON libraries.

Ironically, cJSON is _worse_ in my use case because it doesn't support allocators at all, as you wrote. nlohmann/json fully supports C++ allocators, so it's trivial to allocate on SPIRAM instead of the ESP32's limited DRAM. Support for allocators is why I often tend to pick C++ for projects with heterogeneous kinds of memory.
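
Roughly the shape of it, as a sketch (SpiramAllocator is illustrative; heap_caps_malloc / MALLOC_CAP_SPIRAM come from ESP-IDF, and note the string type below still uses std::string's own allocator):

    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <new>
    #include <string>
    #include <vector>
    #include <esp_heap_caps.h>
    #include <nlohmann/json.hpp>

    // Minimal allocator that puts node storage in SPIRAM.
    template <class T>
    struct SpiramAllocator {
        using value_type = T;
        SpiramAllocator() = default;
        template <class U>
        SpiramAllocator(const SpiramAllocator<U>&) noexcept {}
        T* allocate(std::size_t n) {
            void* p = heap_caps_malloc(n * sizeof(T), MALLOC_CAP_SPIRAM);
            if (!p) throw std::bad_alloc();
            return static_cast<T*>(p);
        }
        void deallocate(T* p, std::size_t) noexcept { heap_caps_free(p); }
    };
    template <class T, class U>
    bool operator==(const SpiramAllocator<T>&, const SpiramAllocator<U>&) { return true; }
    template <class T, class U>
    bool operator!=(const SpiramAllocator<T>&, const SpiramAllocator<U>&) { return false; }

    // basic_json takes the allocator as its template-template parameter.
    using spiram_json = nlohmann::basic_json<
        std::map, std::vector, std::string, bool,
        std::int64_t, std::uint64_t, double, SpiramAllocator>;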

Also, nlohmann/json supports strongly typed serialization and deserialization, which in my experience vastly reduces the number of errors people make when reading/writing JSON. I found it simpler to rewrite some critical code that parsed very complex data with cJSON to use nlohmann/json than to fix up all the tiny errors the previous developer made while reading "raw" JSON: parsing numbers as enums, strings as other kinds of enums, and so on.
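
A sketch of what I mean by strongly typed (de)serialization (Config and Mode are made up; the macros ship with the library):

    #include <string>
    #include <nlohmann/json.hpp>

    enum class Mode { Idle, Active };
    // The enum <-> string mapping lives in exactly one place.
    NLOHMANN_JSON_SERIALIZE_ENUM(Mode, {
        {Mode::Idle, "idle"},
        {Mode::Active, "active"},
    })

    struct Config {
        std::string name;
        Mode mode;
        int retries;
    };
    NLOHMANN_DEFINE_TYPE_NON_INTRUSIVE(Config, name, mode, retries)

    Config load(const std::string& text) {
        // Throws on malformed input instead of silently mis-parsing.
        return nlohmann::json::parse(text).get<Config>();
    }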

cibomahto 2021-08-19 12:17:49 +0000 UTC [ - ]

For what it's worth, cJSON does support a (global) allocator override, using cJSON_InitHooks().

Here's what I use on ESP32 to push allocation to SPIRAM: https://gist.github.com/cibomahto/a29b6662847e13c61b47a194fa...
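
Modulo the details in the gist, the shape of it is roughly this (heap_caps_malloc / MALLOC_CAP_SPIRAM are from ESP-IDF):

    #include <cJSON.h>
    #include <esp_heap_caps.h>

    static void* spiram_malloc(size_t sz) {
        return heap_caps_malloc(sz, MALLOC_CAP_SPIRAM);
    }

    static void spiram_free(void* ptr) {
        heap_caps_free(ptr);
    }

    void json_use_spiram(void) {
        // cJSON routes all allocations through these global hooks.
        cJSON_Hooks hooks = { spiram_malloc, spiram_free };
        cJSON_InitHooks(&hooks);
    }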

nlohmann 2021-08-19 11:57:57 +0000 UTC [ - ]

Great to hear that you've had good experiences with user-defined allocators. It would be great if you could provide a pointer to an example, because we always fall short in testing allocator usage. So if you had a small example, you could really improve the status quo :)

ephaeton 2021-08-19 05:19:52 +0000 UTC [ - ]

Pluses: as a user of nlohmann-json, I also like the support for JSON Pointer, JSON Patch and JSON Merge Patch, which comes in handy at times. I like their way of handling to/from_json (declare functions with the appropriate signature and namespace and the lib will pick them up seamlessly in implicit conversions). The "standard" container facades are appreciated.
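
For instance (a sketch; Vec3 is made up, the rest is the library's documented surface):

    #include <nlohmann/json.hpp>

    namespace geo {
        struct Vec3 { double x, y, z; };

        // Found by ADL; the library picks these up in implicit conversions.
        void to_json(nlohmann::json& j, const Vec3& v) {
            j = nlohmann::json{{"x", v.x}, {"y", v.y}, {"z", v.z}};
        }
        void from_json(const nlohmann::json& j, Vec3& v) {
            j.at("x").get_to(v.x);
            j.at("y").get_to(v.y);
            j.at("z").get_to(v.z);
        }
    }

    void demo() {
        nlohmann::json j = geo::Vec3{1, 2, 3};   // uses to_json
        geo::Vec3 v = j.get<geo::Vec3>();        // uses from_json

        // JSON Pointer and Merge Patch through the same API:
        double x = j.at(nlohmann::json::json_pointer("/x")).get<double>();
        j.merge_patch(nlohmann::json{{"z", 0.0}});
        (void)v; (void)x;
    }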

Minuses: I'd like a way to append a (C-)array to a JSON array "at once" rather than iteratively (i.e., O(n) instead of O(n log n)). Also, the lack of support for JSON Schema is .. slightly annoying.

nlohmann 2021-08-19 11:56:23 +0000 UTC [ - ]

Can you elaborate on your minuses - I don't understand what you mean by "at once" in that context.

AlexandrB 2021-08-19 12:26:30 +0000 UTC [ - ]

For JSON schema, I've found that this third party library works well: https://github.com/pboettch/json-schema-validator

nikki93 2021-08-19 05:25:16 +0000 UTC [ - ]

Yeah, the support for pointer / patch etc. is a definite plus. The customization-point thing I tend to handle with my own customization points, but it's pretty good when a bunch of libraries settle on a standard one (e.g. I think serde in Rust has achieved that to a degree, thanks to the ecosystem there); I definitely want it to be static.

I didn't realize that about the array complexity. Can you not just initialize a JSON array of N elements in O(N) (conceding a `.reserve(N)`-style call beforehand if required)? rapidjson is pretty good about that sort of thing; cJSON's arrays are linked lists, so I basically think of its performance as being at a different level, and for me it's mostly about compile time.

ephaeton 2021-08-19 05:37:36 +0000 UTC [ - ]

I haven't found a way to do that. The only O(n) to-json-array way is using the initializer list constructor AFAICT.
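
For completeness, constructing a fresh value from a standard container also comes out linear - a sketch of the two routes (untested against 3.10.0; appending a C array into an existing array "at once" is still the gap):

    #include <cstddef>
    #include <vector>
    #include <nlohmann/json.hpp>

    void demo(const float* data, std::size_t n) {
        // Constructing from a standard container is a single linear pass:
        nlohmann::json j = std::vector<float>(data, data + n);

        // For incremental filling, array_t is std::vector<json>,
        // so a reserve at least avoids reallocation churn:
        nlohmann::json j2 = nlohmann::json::array();
        j2.get_ref<nlohmann::json::array_t&>().reserve(n);
        for (std::size_t i = 0; i < n; ++i)
            j2.push_back(data[i]);
        (void)j; (void)j2;
    }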

esprehn 2021-08-19 03:49:46 +0000 UTC [ - ]

This doesn't appear to be all that "modern": it doesn't support string_view for lookup without extra allocations, and it doesn't support unordered_map (or even use it as the default) for objects.

It seems they're targeting C++11?

nlohmann 2021-08-19 12:01:59 +0000 UTC [ - ]

The development started in 2013, when C++11 was still modern. Since then, the term "modern C++" has, to my understanding, become a synonym for "C++11 and later". Of course, some code may look dated compared to newer C++ constructs, but the main goal is to integrate JSON into any C++ code base without making it look odd.

The string_view lookup is nearly done, but I did not want to wait for it, because it would have delayed the release even more.

I'm also working on supporting unordered_map - using it as the container for objects would be easy if we just broke the existing API; the hard part is supporting it with the current (probably badly designed) template API.

MauranKilom 2021-08-19 08:11:12 +0000 UTC [ - ]

> and it doesn't support unordered_map (or even use it as the default) for objects.

std::unordered_map has massive overhead in space and time. Maybe I misunderstand your idea, but outside of extremely niche use cases, std::unordered_map is basically never the right tool (unless you don't care in the slightest about performance or memory overhead).

agent327 2021-08-19 08:36:08 +0000 UTC [ - ]

Source? A hash map should in principle be faster than a tree, with far fewer comparisons, and those against a computed hash value instead of the whole object.

einpoklum 2021-08-19 10:57:01 +0000 UTC [ - ]

Well, std::unordered_map is faster than std::map, but it is still slow, since it uses linked lists for its buckets, with lots of dynamic allocation. See also:

https://stackoverflow.com/a/42588384/1593077

and the link there. Not sure that's what GP meant though.

agent327 2021-08-19 13:15:01 +0000 UTC [ - ]

I don't believe it has to; some kind of bucket optimisation (multiple elements per allocation, instead of one element per allocation) should also meet all the constraints the standard sets for this type.

And I don't like it when people use ridiculous hyperbole to describe what is in reality barely perceptible overhead, so it would be good if the person I originally responded to came out and defended his POV. Hopefully using actual numbers instead of agitated handwaving and screaming...

MauranKilom 2021-08-19 16:06:10 +0000 UTC [ - ]

> some kind of bucket-optimisation (multiple elements per allocation, instead of one element per allocation) should also meet all constraints the standard sets for this type

Pretty sure that makes it impossible to implement merge without pointer/iterator invalidation, which is a constraint imposed by the standard (https://en.cppreference.com/w/cpp/container/unordered_map/me...).

> it would be good if the person I originally responded to came out and defended his POV

See https://probablydance.com/2017/02/26/i-wrote-the-fastest-has... if you want numbers, or for example this talk by the same author https://www.youtube.com/watch?v=M2fKMP47slQ.

beached_whale 2021-08-19 15:49:05 +0000 UTC [ - ]

Abseil has a B-tree-based map and it performs much better. I think it maintains the same constraints (iterator invalidation/exception safety...) as std::unordered_map.

The issue is that implementors won't change their implementation of unordered_map because it would be an ABI break, to put it simply. It could be better. But there are also other tools, like flat maps and open addressing, that aren't available in the standard library.

wmkn 2021-08-19 12:41:56 +0000 UTC [ - ]

What? I would argue that unordered_map is nearly always a reasonable choice when you need an unordered associative container. There is nothing particularly wrong with it.

Not sure what your rationale is for saying it’s only useful for extremely niche use cases. I would be curious to know why you think so.

MauranKilom 2021-08-19 16:17:19 +0000 UTC [ - ]

It has the right asymptotic guarantees for a hash map, but beyond that it doesn't have much going for it. The degree to which it emphasizes pointer/iterator stability and arbitrary load factors is not a good tradeoff in almost any practical case (unless you do not care about performance).

See https://www.youtube.com/watch?v=M2fKMP47slQ for a more comprehensive discussion, or the sibling threads here.

spacechild1 2021-08-19 10:29:57 +0000 UTC [ - ]

> but outside of extremely niche use cases, std::unordered_map is basically never the right tool

std::map is only useful if you need the data to be ordered; otherwise std::unordered_map is the right default choice.

> std::unordered_map has massive overhead in space and time.

Yes, std::unordered_map uses a bit more memory, but please show me some benchmarks where it is slower than std::map - especially with string keys!

BTW, for very small numbers of items the most efficient solution is often a plain std::vector + linear search.

MauranKilom 2021-08-19 16:14:07 +0000 UTC [ - ]

I didn't mean to imply it will perform worse than std::map. My point is that std::unordered_map has very particular design tradeoffs (primarily the requirement to support arbitrary load factors and the bucket interface) that fundamentally impose large space and time overhead: every node has to be stored as a separately allocated linked-list entry.

That's sometimes ok, particularly if you don't care about performance (beyond asymptotic behavior). But if you do, you'd probably be well-advised to try out hash map implementations that actually don't leave performance on the table by design.

adzm 2021-08-19 12:08:14 +0000 UTC [ - ]

> BTW, for very small numbers of items the most efficient solution is often a plain std::vector + linear search.

This is why flat_map is a great solution. It's basically an ordered vector with a map-like interface, but without the per-node memory allocation (and the cache-locality issues that come with it). Until you get into thousands of elements, it's probably the best choice of container.
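
For example, with Boost's implementation (a sketch; other flat_map implementations look much the same):

    #include <string>
    #include <boost/container/flat_map.hpp>

    int main() {
        // Contiguous storage: one allocation regime, good cache locality.
        boost::container::flat_map<std::string, int> m;
        m.reserve(3);            // vector-like capacity control
        m.emplace("a", 1);
        m.emplace("b", 2);
        auto it = m.find("b");   // binary search over sorted storage
        return it != m.end() ? it->second : 0;
    }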

codr7 2021-08-19 11:40:19 +0000 UTC [ - ]

It's a gradient, from the vector you mentioned all the way up to hash maps.

And ordered maps fall somewhere in the middle.

They might make sense if you're creating massive amounts of medium-sized maps.

Or if you need range searches etc.

It's all compromises, there are no hard rules.

batty_alex 2021-08-19 11:44:45 +0000 UTC [ - ]

I think you might have it backwards. std::map is the one you only want to reach for when you need a sorted dictionary (it sorts on insert)

std::unordered_map is typically the go-to hash map on any given C++ day. You’re probably fine with std::map outside of large n

EDIT: being pedantic

adzm 2021-08-19 12:03:12 +0000 UTC [ - ]

You can provide alternative containers if needed by templating basic_json; std::map is just the default. Personally I like using a flat_map, an ordered map built on top of a vector, which provides nice cache locality and is generally even faster than a hash map / unordered_map except for extremely large objects.
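
Concretely (nlohmann::ordered_json ships with the library and swaps the object container through the same basic_json template; it keeps insertion order rather than being a flat_map):

    #include <nlohmann/json.hpp>

    int main() {
        // The default json alias stores objects in std::map ...
        nlohmann::json j = {{"b", 1}, {"a", 2}};           // keys come back sorted

        // ... while ordered_json plugs a different container into
        // basic_json, preserving insertion order instead.
        nlohmann::ordered_json oj = {{"b", 1}, {"a", 2}};  // order preserved
        return j.size() == oj.size() ? 0 : 1;
    }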

beached_whale 2021-08-19 04:25:42 +0000 UTC [ - ]

C++11 is still common enough that if you want a large user base, it's the target. And since this library parses to a DOM and doesn't use features like PMR, the language features it needs don't go beyond standard containers and the like.

quietbritishjim 2021-08-19 10:58:57 +0000 UTC [ - ]

I think it makes sense to target C++11 - just that you then shouldn't call the library "modern".

nlohmann 2021-08-19 12:03:30 +0000 UTC [ - ]

There is also a SAX parser.

But to be honest, I have not yet played around with PMR.

beached_whale 2021-08-19 12:50:25 +0000 UTC [ - ]

I know when I was benchmarking JSON Link, I saw a 30-50% increase in performance at the time when using better allocators, and PMR can help a lot there with things like bump allocators.
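
The pattern, independent of any particular JSON library (C++17 std::pmr; a minimal sketch):

    #include <cstddef>
    #include <memory_resource>
    #include <string>
    #include <vector>

    int main() {
        // A bump allocator: carve allocations out of one buffer,
        // release everything at once when the resource goes away.
        std::byte buffer[4096];
        std::pmr::monotonic_buffer_resource pool(buffer, sizeof(buffer));

        std::pmr::vector<std::pmr::string> rows(&pool);
        rows.emplace_back("parsed token");   // no call to the global heap
        return static_cast<int>(rows.size()) - 1;
    }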

pjmlp 2021-08-19 05:13:42 +0000 UTC [ - ]

Especially among the C++ community groups.

beached_whale 2021-08-19 05:34:40 +0000 UTC [ - ]

It is changing; I saw some surveys showing C++17 at around half the shops. I know my JSON library wouldn't be possible without C++17, so it's a loss of eyes.

pjmlp 2021-08-19 05:38:52 +0000 UTC [ - ]

Yeah, the thing is how much of C++17 is actually used beyond the language version.

For example, C++/WinRT is a C++17 library, yet plenty of samples use C-style strings and vectors.

howolduis 2021-08-19 04:17:29 +0000 UTC [ - ]

people love to complain

rualca 2021-08-19 05:51:07 +0000 UTC [ - ]

> people love to complain

Do you find it unreasonable to point out that what's been advertised doesn't match what's being offered?

nlohmann 2021-08-19 12:04:27 +0000 UTC [ - ]

As I tried to describe earlier: "modern C++" is not necessarily "using the latest standard", but rather "C++ since it was updated with C++11 (and later)".

rualca 2021-08-19 12:08:42 +0000 UTC [ - ]

Aiming at C++11 is not a reasonable definition of "modern": it was C++'s second ISO standard, published a decade ago, and it has since seen two or three major updates (depending on one's opinion of C++14).

nlohmann 2021-08-19 12:13:01 +0000 UTC [ - ]

It's not about "aiming at C++11", but rather about writing code that does not look odd in a code base using constructs from C++11, C++14, C++17, etc. The library uses C++11 to implement an API that should not feel weird when used together with other containers from the STL.

maleldil 2021-08-19 12:22:52 +0000 UTC [ - ]

C++11 was a huge shift in how C++ is written, and the term coined for code written using the new techniques was "Modern C++". Whether you think that term should instead mean "the latest C++ version" is a different matter altogether.

bobnamob 2021-08-19 12:21:10 +0000 UTC [ - ]

In C++ land, "modern" has become synonymous with post-11 - effectively a domain-specific definition, and a reasonable one considering the difference between pre- and post-11. Pre- and post-20 will probably be treated similarly in a decade.

q-rews 2021-08-19 05:18:47 +0000 UTC [ - ]

People love correct statements. C++11 was released 10 years ago and there are three newer versions.

pjmlp 2021-08-19 05:12:55 +0000 UTC [ - ]

For many, even using RAII (which even Turbo Vision for MS-DOS made use of) is already "modern".

mike_hock 2021-08-19 08:29:45 +0000 UTC [ - ]

It uses the ordering of std::map for its own comparison operators and hash function.

microtherion 2021-08-19 13:02:23 +0000 UTC [ - ]

An awesome library, wonderfully convenient to use. I particularly like being able to use the same code for JSON and CBOR.

SloopJon 2021-08-19 05:14:24 +0000 UTC [ - ]

This is the library I settled on for a project a couple of years ago. Although I was intrigued by the magic of expression templates, my usage is actually pretty straightforward and boring. One thing it has that some of the other libraries I evaluated lack is the ability to parse numeric strings yourself, which I wanted for a decimal floating-point type.

There are lots of other libraries, and surely some faster ones, but I haven't felt any need to change. I was nervous when I got an inexplicable overload ambiguity in xlclang++ for AIX, but it went away when I grabbed a newer version of the library.

iamvnraju 2021-08-19 05:39:53 +0000 UTC [ - ]

I remember using nlohmann's single-header JSON parser library around 2018 on a client app and found it to be pretty nifty. For REST, I had used Boost Beast.

_3u10 2021-08-19 04:31:29 +0000 UTC [ - ]

Cool project. Others may also be interested in simdjson which parses at about 4GB/sec.

https://github.com/simdjson/simdjson/blob/master/doc/basics....

adzm 2021-08-19 12:10:57 +0000 UTC [ - ]

simdjson is an awesome project. However, it's analogous to a SAX parser for XML vs a DOM model. If that is all you need (or if you are building something like a DOM yourself) then it is pretty much impossible to beat. Having an entire JSON document parsed and in memory at once though has a bunch of advantages for a friendlier API.

joshxyz 2021-08-19 08:09:14 +0000 UTC [ - ]

yes another great one

these projects are just commendable!

modeless 2021-08-19 06:55:29 +0000 UTC [ - ]

Huh, I was just in some code that uses this library a few days ago. I wish it included a base64 implementation for more efficiently encoding arrays of bytes or floats. JSON is inefficient for large arrays of numbers.

jcelerier 2021-08-19 07:00:31 +0000 UTC [ - ]

Why wouldn't you use a base64 library? This is prime library-bloat mindset. Base64 has no place in a JSON lib; there are a ton of implementations out there.

modeless 2021-08-19 14:28:34 +0000 UTC [ - ]

A base64 implementation is very small, but not so trivial that you should implement it yourself or paste in a one-liner from Stack Overflow. It's widely used with JSON. C++ has no package manager to make importing tiny libraries easy, so C++ libraries should be a little more self-contained than npm packages or Rust crates. For those reasons it would be absolutely reasonable to include a base64 implementation in a C++ JSON library.
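
For scale, here's roughly the entire encode half (a bare-bones sketch of RFC 4648 with padding - exactly the sort of thing you want vetted once in a library rather than re-rolled per project):

    #include <cstdint>
    #include <string>
    #include <vector>

    std::string base64_encode(const std::vector<std::uint8_t>& in) {
        static const char tbl[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
        std::string out;
        out.reserve((in.size() + 2) / 3 * 4);
        std::size_t i = 0;
        for (; i + 3 <= in.size(); i += 3) {     // full 3-byte groups
            std::uint32_t n = (in[i] << 16) | (in[i + 1] << 8) | in[i + 2];
            out.push_back(tbl[(n >> 18) & 63]);
            out.push_back(tbl[(n >> 12) & 63]);
            out.push_back(tbl[(n >> 6) & 63]);
            out.push_back(tbl[n & 63]);
        }
        if (i + 1 == in.size()) {                // one trailing byte
            std::uint32_t n = in[i] << 16;
            out.push_back(tbl[(n >> 18) & 63]);
            out.push_back(tbl[(n >> 12) & 63]);
            out += "==";
        } else if (i + 2 == in.size()) {         // two trailing bytes
            std::uint32_t n = (in[i] << 16) | (in[i + 1] << 8);
            out.push_back(tbl[(n >> 18) & 63]);
            out.push_back(tbl[(n >> 12) & 63]);
            out.push_back(tbl[(n >> 6) & 63]);
            out.push_back('=');
        }
        return out;
    }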

tsbinz 2021-08-19 07:25:26 +0000 UTC [ - ]

Uh, it has plenty of methods for more compact encodings? https://github.com/nlohmann/json#binary-formats-bson-cbor-me...

modeless 2021-08-19 14:23:52 +0000 UTC [ - ]

Sure, if you want to change the format of the whole message. If you're communicating with something that accepts JSON and you can't or don't want to change that, then having a more efficient option is nice.

versatran01 2021-08-19 03:58:49 +0000 UTC [ - ]

I prefer boost json

darknavi 2021-08-19 05:27:59 +0000 UTC [ - ]

How do people go about integrating Boost these days? For cross-platform projects I use CMake, and Boost seems like a _very_ scary library to try to build/ship on multiple platforms.

chrisjericho 2021-08-19 05:54:24 +0000 UTC [ - ]

I use CMake almost exclusively (even for Windows) and "find_package(Boost REQUIRED)" works most of the time. A lot of common Boost libraries (algorithm, geometry) are header-only and are not much of a hassle to include.

IIRC, you need to change that to "find_package(Boost REQUIRED COMPONENTS filesystem)" if you want Boost dependencies that need a separate .dll/.so (in this case boost::filesystem). However, I rarely need those, so I may be wrong.

electroly 2021-08-19 15:37:41 +0000 UTC [ - ]

FWIW, Boost.JSON in particular doesn't have any source files; it's just header-only templates. You don't actually need to build or link Boost to use it; you just need the header files in the include path.

Johnnynator 2021-08-19 05:59:34 +0000 UTC [ - ]

You should be able to build and include Boost.JSON as a standalone subproject in CMake if you are using C++17 (or use it as a header-only lib).

It gets far more complicated with C++11, since you then also need a ton of other Boost modules.

For more details, see its README: https://github.com/boostorg/json

nly 2021-08-19 05:55:35 +0000 UTC [ - ]

CMake supports Boost out of the box; you just need to set BOOST_ROOT to your Boost install path.

With proper CI it's not a big deal

jcelerier 2021-08-19 07:04:53 +0000 UTC [ - ]

    wget boost_1_77.tar.gz
    tar xaf boost*
    cmake ... -DBOOST_ROOT=.../boost_1_77
Works for the huge majority of Boost; most parts that required a build were the ones that have since been integrated into C++, like thread, regex, chrono and filesystem.

geokon 2021-08-19 06:16:59 +0000 UTC [ - ]

Use Hunter for dependency management

https://github.com/cpp-pm/hunter

There is a bit of a learning curve, but it's the only dependency manager that does things right (all from within CMake)

rualca 2021-08-19 05:56:52 +0000 UTC [ - ]

What exactly makes Boost scary to ship? I understand that Boost requires some attention to build right across platforms (see zlib support on Windows), but other than that it's pretty straightforward to add a FindBoost statement to your build and move on to other things.

beached_whale 2021-08-19 04:07:26 +0000 UTC [ - ]

It does seem to have a very similar interface to this and has the performance of RapidJSON. So a nice balance in that regard.

newusertoday 2021-08-19 06:30:29 +0000 UTC [ - ]

Off topic: is there a tool for querying a JSON file that also has autocomplete for keys?

einpoklum 2021-08-19 10:54:45 +0000 UTC [ - ]

From the repository's README:

> Speed. There are certainly faster JSON libraries out there. However, if your goal is to speed up your development by adding JSON support with a single header, then this library is the way to go.

It is essentially admitting to being slow(er). We are not supposed to pay for code modernity with performance; in fact, it should be the other way around.

Does someone know the innards of this library as opposed to, say, jsoncpp or other popular libraries, and can describe the differences?

nlohmann 2021-08-19 12:07:11 +0000 UTC [ - ]

The library aims at making anything related to JSON straightforward to implement. For some applications, this is a good compromise. The comment in the README is like the answer to a FAQ "How fast is it?"

einpoklum 2021-08-19 16:12:59 +0000 UTC [ - ]

> This is a good compromise

You're making the false assumption that there is a compromise to be made - but there isn't. A modern JSON library should be the fastest (or about the same speed as the fastest), and vice-versa.

TeeMassive 2021-08-19 04:45:00 +0000 UTC [ - ]

> The CSV format itself is notoriously inconsistent

So what? The format is so simple that the inconsistencies don't even matter; hell, the format can be inferred from the data, and that can even be automated. Unlike JSON/XML, most software can automatically make sense of CSV; JSON needs a developer to understand it and transform it into workable data.

> CSVs often begin life as exported spreadsheets or table dumps from legacy databases, and often end life as a pile of undifferentiated files in a data lake, awaiting the restoration of their precious metadata so they can be organized and mined for insights.

CSVs are the de facto way to export spreadsheets, and if you have to do this, the problem is that you still have spreadsheets in your data flow, not the CSVs. Same goes for legacy systems.

Also, CSVs come from places where having more sophisticated data serializers is near impossible, and the time to code, compute, parse and analyze CSVs is trivial compared to any other format out there.

> I’ve spent many years battling CSVs in various capacities. As an engineer at Trifacta, a leading data preparation tool vendor, I saw numerous customers struggling under the burden of myriad CSVs that were the product of even more numerous Excel spreadsheet exports and legacy database dumps. As an engineering leader in biotech, my team struggled to ingest CSVs from research institutions and hospital networks who gave us sample data as CSVs more often than not.

Of course, if you are working in biotech, you'll work with scientists (horrible coders, most of the time) and legacy systems, because it's a domain that's very averse to change that might break stuff. And it's not the CSV format's fault if it's used as the wrong tool for the job.

> Another big drawback of CSV is its almost complete lack of metadata.

Again, emphasis on the right tool for the job. For simple tasks, metadata is not needed.

Stratoscope 2021-08-19 04:55:00 +0000 UTC [ - ]

Looks like you replied on the wrong thread. Here's the one you meant to reply to:

https://news.ycombinator.com/item?id=28221654

Your quote "The CSV format itself is notoriously inconsistent" comes from this page, which that thread discusses:

https://www.bitsondisk.com/writing/2021/retire-the-csv/

knorker 2021-08-19 08:25:15 +0000 UTC [ - ]

"Modern" C++.

The second paragraph talks about setting a #define before including a header, as if this were still the '90s.

nlohmann 2021-08-19 12:08:56 +0000 UTC [ - ]

I would be happy to have a different way to decide if a member variable exists or not without touching anything else.

mike_hock 2021-08-19 08:49:49 +0000 UTC [ - ]

Unfortunately, this antipattern has persisted way beyond the 90s. But, you know, if the only way the standard provides to achieve a certain goal is a shitty hack, people will use the shitty hack.

knorker 2021-08-19 11:22:36 +0000 UTC [ - ]

It's definitely not the only way, though.

It can be a trade-off, but it's becoming less and less of one. I'll admit I haven't looked at the implementation, but when you sell something as "modern", doing it this way for performance, for example, is not OK.

And almost every single use I've seen "for performance" actually has zero or negligible performance impact.

a-dub 2021-08-19 10:27:25 +0000 UTC [ - ]

what's wrong with #define(s)?

einpoklum 2021-08-19 11:00:31 +0000 UTC [ - ]

The compiler is blind to them, so they can't be reasoned about / carried forward anywhere / allowed for in one way or another.

Moreover, because of `#define`s, the effect of including a file can be entirely different, so a compiler can never say "Oh, I know what's in this include file already, I don't need to include it again in other translation units".

Of course #defines have many uses and at present cannot be avoided entirely, but C++ has been putting other mechanisms in place to obviate them.

In particular, look for CppCon talks about Modules.

quietbritishjim 2021-08-19 11:32:00 +0000 UTC [ - ]

Of course you'd get your compiler to define that symbol globally rather than literally using a #define. Assuming you're using CMake, that means target_compile_definitions().
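
For instance, using the JSON_DIAGNOSTICS switch this release adds (a sketch; the CMake line in the comment is the build-system equivalent of the in-source #define):

    // Define the switch once, globally, so every translation unit agrees:
    //     target_compile_definitions(my_target PRIVATE JSON_DIAGNOSTICS=1)
    // instead of the fragile per-file form:
    //     #define JSON_DIAGNOSTICS 1
    #include <nlohmann/json.hpp>

    // With JSON_DIAGNOSTICS enabled, the thrown exception's message
    // includes the path of the offending element, e.g. "(/address/housenumber)".
    int main() {
        nlohmann::json j = {{"address", {{"housenumber", "12"}}}};
        try {
            int n = j["address"]["housenumber"];   // type error: string, not int
            (void)n;
        } catch (const nlohmann::json::exception&) {
            return 1;
        }
        return 0;
    }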