dlib is a general purpose C++ library with a strong focus on portability and application correctness.
dlib includes a platform abstraction layer for common tasks such as interfacing with network services, handling threads, or creating graphical user interfaces.
NOTE: dlib is licensed and distributed under the terms of the Boost Software License (BSL-1.0).
Here are some key features of "dlib":
- Very portable
- All non-ISO-C++ code is confined to the OS abstraction layers, which are kept as small as possible (about 9% of the library). Everything else is either layered on top of the OS abstraction layer or is pure ISO C++.
- Big/little endian agnostic.
- No assumptions are made about structure byte packing.
- Many container classes. What makes these containers different from what can be found in the STL is how they move objects into and out of themselves. Rather than copying things around, everything is moved by swapping. This allows you to do things like have containers of containers of containers without paying for deep copies. They also have simpler interfaces.
- There are many versions of each container with different performance characteristics so you have great flexibility in choosing exactly what you want.
- Many of the containers perform all their allocations through the memory_manager object and unlike the STL there is no requirement that different instances of the memory manager/allocator be able to free objects allocated from each other. This allows for much more interesting memory manager implementations.
- All containers are serializable.
- New Features:
- Added Python interfaces to dlib's structural support vector machine solver and Hungarian algorithm implementation.
- Added running_cross_covariance
- Added order_by_descending_distance()
- Added is_finite()
- Added the csv IO manipulator that lets you print a matrix in comma separated value (CSV) format.
- Non-Backwards Compatible Changes:
- Changed the object detector testing functions to output average precision instead of mean average precision.
- Added an option to weight the features from a hashed_feature_image relative to the number of times they occur in an image. I also made it the default behavior to use this relative weighting and changed the serialization format to accommodate this.
- Bug fixes:
- Fixed typo in learn_platt_scaling(). The method wasn't using the exact prior suggested by Platt's paper.
- Fixed a bug in running_scalar_covariance that caused the covariance() and correlation() methods to output the wrong answer if the covariance was negative.
- Gave the image_wind...