

The inference speed is ~28 FPS on an RTX 2080Ti GPU. Tuning the implementation to the target dataset may yield higher performance.

Results are obtained by reusing detections from previous methods with shared hyper-parameters.

- The arXiv preprint of OC-SORT is released.
- A preview version is released after a preliminary cleanup and refactoring.
- Integration with BYTE and multiple cost metrics, such as GIoU and CIoU, is supported. Performance on more datasets is still to be verified.
- The mmtracking version is still in preview; if you want a more advanced and customizable tracking experience, you may want to give it a try. It brings significant performance improvements on MOT17, MOT20, and DanceTrack.
- Deep-OC-SORT, a combination of OC-SORT and deep visual appearance, is released on GitHub and arXiv.
- We updated the preprint on arXiv after an intensive revision of the writing; OOS is renamed "Observation-centric Re-Update" (ORU).

Observation-Centric SORT (OC-SORT) is a pure motion-model-based multi-object tracker. It aims to improve tracking robustness in crowded scenes and when objects move non-linearly. It is designed by recognizing and fixing limitations of the Kalman filter and SORT, yet it remains simple, online, and real-time, and it is flexible to integrate with different detectors and matching modules, such as appearance similarity.
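To illustrate one of the cost metrics mentioned above, here is a minimal GIoU sketch for axis-aligned boxes. This is a generic illustration under assumed conventions (`[x1, y1, x2, y2]` box format, function name `giou`), not this repository's actual implementation:

```python
def giou(box_a, box_b):
    """Generalized IoU between two [x1, y1, x2, y2] boxes.

    GIoU = IoU - (area(C) - area(A union B)) / area(C),
    where C is the smallest box enclosing both A and B.
    Ranges from -1 (far apart) to 1 (identical boxes).
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection rectangle (zero area if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest box enclosing both inputs
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    area_c = (cx2 - cx1) * (cy2 - cy1)

    return iou - (area_c - union) / area_c
```

Unlike plain IoU, GIoU stays informative for non-overlapping boxes (the enclosing-box penalty makes it negative and distance-sensitive), which is useful as an association cost when a track's prediction and a detection have drifted apart.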
