This role is all about harnessing the power of machine learning to automate critical aspects of Repli5’s tools for creating 3D worlds and digital twins. As a Computer Vision Engineer, you will have the opportunity to explore, test, and adapt the latest technologies in generative AI, unsupervised segmentation, and 3D reconstruction to address real-world challenges.
-Profile focus: A strong background in computer vision or machine learning applied to image content and 3D data.
-Areas of knowledge: Familiarity with the foundations of image processing, projective geometry, classification, object detection, and segmentation. Expertise in using generative adversarial networks (GANs) or latent diffusion models (LDMs) is a plus. Experience in preparing models for production is also valued.
-Collaboration: Experience working in a collaborative environment is essential.
-Innovation: Comfortable understanding the methods and architectures of new models and identifying opportunities for adaptation and improvement. Practical implementation takes precedence, although deep theoretical knowledge is appreciated.
-Education: Ideally, you hold a Master's degree in a field closely related to Computer Science or Data Science.
-Experience: We welcome both professionals with a history of working in computer vision-related roles and recent graduates who can demonstrate their skills through relevant projects.
-Skills: Proficiency in Python and experience with libraries such as OpenCV, Pillow, PyTorch, torchvision, and Open3D. Knowledge of C++ and related libraries is also highly valued.
-Bonuses: Any experience in the field of autonomous driving or working with synthetic data is considered a bonus.
You will collaborate on automating the creation of 3D worlds and digital twins using cutting-edge machine learning technologies. You will be expected to stay informed about the latest methods and architectures, actively identify opportunities to enhance our tools, and take proactive steps to implement these improvements.