Computer Vision
for everyone

Model, train and deploy your computer vision solutions in no time.

Build your Computer Vision solution in a fluid and intuitive way

01

Download

Ikomia Studio to apply models to your images or videos in 1 click.

02

Import

and easily open your images, videos, video streams and annotated datasets.

03

Model

your idea quickly with state-of-the-art models and create your first PoC.

04

Deploy

your CV solution in the cloud or on embedded systems with Ikomia API.


3 tools for a complete solution

Ikomia Studio

Ikomia Library

Ikomia API

Apply an algorithm on your data in 1 click.

Keep an eye on your workflow.

Evaluate the results.

Easily configure your models

TransUNet

TransUNet inference for semantic segmentation.

Paper: TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. J. Chen, Y. Lu, Q. Yu, X. Luo, E. Adeli, Y. Wang, L. Lu, A. L. Yuille, Y. Zhou. Preprint 2021.

Original code: github.com/Beckschen/TransUNet

Detectron2 DeepLabV3Plus

Detectron2 DeepLabV3+ inference for semantic segmentation.

Paper: Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam. ECCV 2018.

Original code: github.com/facebookresearch/detectron2

Ikomia API

Ikomia API is a Python library that lets you prototype your Computer Vision application quickly. Whether you start from an Ikomia Studio workflow or build one from scratch, you can run state-of-the-art algorithms on any type of machine (desktop computer, compute server, cloud instance) in a few lines of code.
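As a minimal sketch, a "few lines of code" workflow with Ikomia API might look like this (assumes `pip install ikomia` has been run, and that the algorithm name `infer_yolo_v7` and the image path are placeholders you would swap for your own):

```python
# Minimal Ikomia API sketch: build a workflow and run an inference
# algorithm on a single image. Assumes `pip install ikomia` and that
# an algorithm named "infer_yolo_v7" is available on Ikomia HUB
# (the name and image path here are illustrative placeholders).
from ikomia.dataprocess.workflow import Workflow

# Create an empty workflow, then add an inference task to it;
# auto_connect wires the task's inputs to the workflow source.
wf = Workflow()
detector = wf.add_task(name="infer_yolo_v7", auto_connect=True)

# Run the whole workflow on one input image.
wf.run_on(path="path/to/your/image.jpg")
```

The same workflow object can be saved from Ikomia Studio and reloaded here, which is what makes the Studio-to-API handoff a low-code step rather than a rewrite.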

Ikomia Library

TransUNet
TransUNet inference for semantic segmentation.

Paper: TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. J. Chen, Y. Lu, Q. Yu, X. Luo, E. Adeli, Y. Wang, L. Lu, A. L. Yuille, Y. Zhou. Preprint 2021.

Code: github.com/Beckschen/TransUNet

Detectron2 DeepLabV3Plus

Detectron2 DeepLabV3+ inference model for semantic segmentation.

Paper: Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam. ECCV 2018.

Code: github.com/facebookresearch/detectron2


Find, test, integrate and deploy the best Computer Vision algorithms

The best state-of-the-art algorithms in 1 click

We select the best state-of-the-art algorithms available in Open Source in all major fields of Computer Vision (classification, detection, segmentation, pose…), and we test them for you.

No-code tool for junior developers

 

Use all our algorithms without writing a single line of code.

Test OpenCV in 1 click or start using deep learning algorithms effortlessly.

Low-code solution for experienced developers

Integrate your own algorithms into Ikomia Studio to test, compare and improve them.

Then run them anywhere, on any machine.

Easily deploy your algorithms with the API

Our API allows you to deploy your Computer Vision applications on any remote computing service (Google Colab, AWS, GCP…) or on your own servers.

Start your project

Links

Python API documentation

Learn and create your first Ikomia Python app with our API. Enjoy the Ikomia tools.

Ikomia GitHub

Ikomia Studio (AGPLv3), Ikomia API (LGPL) and all our models are open source. Visit our GitHub repo for more information.