enliteAI is a technology provider specializing in Reinforcement Learning and Computer Vision/GeoAI. It offers power grid optimization solutions built on reinforcement learning, and GeoAI technology that leverages mobile mapping data to identify road signs, markings, and defects. Its services include AI strategy and transformation programs, AI research and development, and prototyping and project delivery. enliteAI also maintains Maze, an open-source framework for applied Reinforcement Learning. Notable clients include A1 Telekom Austria Group, Andritz AG, Audi AG, and many others.
What We Offer:
* An international, product- and innovation-driven team with deep expertise in Computer Vision and Reinforcement Learning, as well as distributed training, data engineering, MLOps and cloud architectures.
* Work with the latest technologies at the interface between research and industry (enliteAI is an Organizing Node of the ELISE EU research network).
* Personal growth: continuous training and education opportunities, plus a budget and time allotment for individual R&D projects, training or conference participation.
* Flexible work models: remote work, an office in Vienna's 1st district and minimal core hours.
Experience Requirements:
* 3+ years of work experience in data-driven environments.
Other Requirements:
* Passionate about everything related to AI, Machine Learning and Computer Vision.
* Python programming skills with an emphasis on data engineering and distributed processing (e.g. Flask, Postgres, SQLAlchemy, Airflow); a brief sketch of this kind of stack follows this list.
* Proficiency in working with databases and data storage solutions.
* Experience with Kubernetes and Docker (Helm, Terraform, Amazon Elastic Kubernetes Service).
* Familiarity with cloud environments (AWS, Google Cloud, Azure).
* Accustomed to mature software development workflows (Git, issue management, documentation, unit testing, CI/CD).
* Fluent in English, both spoken and written.
* Valid work permit for Austria.
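As a minimal, hypothetical sketch of the stack named above: a SQLAlchemy table definition for detection results backed by Postgres. The table name, columns and connection string are invented for illustration and are not enliteAI's actual schema.

```python
# Hypothetical sketch only: table, columns and DSN are placeholders.
from sqlalchemy import create_engine, Column, Integer, String, Float
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class SignDetection(Base):
    """One detected road sign, as an imagery pipeline might emit it."""
    __tablename__ = "sign_detections"

    id = Column(Integer, primary_key=True)
    sign_class = Column(String, nullable=False)  # e.g. "speed_limit_50"
    lat = Column(Float, nullable=False)
    lon = Column(Float, nullable=False)
    confidence = Column(Float, nullable=False)

engine = create_engine("postgresql://user:pass@localhost/geoai")  # placeholder DSN
Base.metadata.create_all(engine)

# Persist one detection.
with Session(engine) as session:
    session.add(SignDetection(sign_class="stop", lat=48.2082, lon=16.3738, confidence=0.97))
    session.commit()
```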
Responsibilities:
* Collaborate with our machine learning and backend engineers to design and manage scalable processing pipelines in production environments (see the pipeline sketch after this list).
* Implement robust processing flows and I/O-efficient data structures, powering use cases such as road surface analysis, sign detection and localization on large volumes of point cloud and imagery data.
* Design and manage the relevant database schemas in close collaboration with backend engineering.
* Create and maintain comprehensive documentation of the processing pipelines, database schemas, configuration and software architecture.
* Collaborate with our machine learning engineers and data engineers to integrate our models into processing pipelines and deploy them to production environments.
* Design and manage our GPU & CPU server infrastructure, from on-prem Kubernetes clusters to cloud deployments.
* Manage and orchestrate the data pipelines, data storage systems and associated synchronization processes for model training and execution.
* Own our CI/CD pipelines (based on GitLab).
* Establish monitoring of model, pipeline and infrastructure health; set up logging to capture relevant information for debugging and auditing; and enforce security best practices to safeguard both the models and the data they process (a minimal monitoring sketch follows this list).
* Create and maintain comprehensive documentation of the infrastructure, deployments, configuration and system architecture.
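To make the pipeline work above concrete, here is a minimal Airflow DAG sketch. The DAG, task names and steps are hypothetical illustrations of this kind of orchestration, not enliteAI's actual pipeline.

```python
# Hypothetical sketch: DAG and task names are invented for illustration.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule=None, start_date=datetime(2024, 1, 1), catchup=False)
def road_asset_pipeline():

    @task
    def ingest_tiles() -> list[str]:
        # Pull newly captured mobile-mapping tiles from storage.
        return ["tile_001", "tile_002"]

    @task
    def detect_signs(tiles: list[str]) -> int:
        # Run sign detection over each tile and persist the results.
        return len(tiles)

    # Calling one task with the other's output defines the dependency.
    detect_signs(ingest_tiles())

road_asset_pipeline()
```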
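Likewise, a minimal sketch of pipeline-health monitoring and logging using the prometheus_client library; metric names and the port are placeholders, not an actual deployment.

```python
# Hypothetical sketch: metric names and port are placeholders.
import logging
import time

from prometheus_client import Counter, Gauge, start_http_server

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

TILES_PROCESSED = Counter("tiles_processed", "Point cloud tiles processed")
PIPELINE_UP = Gauge("pipeline_up", "1 while the pipeline loop is healthy")

def process_tile(tile_id: str) -> None:
    log.info("processing %s", tile_id)  # log line captured for debugging/auditing
    TILES_PROCESSED.inc()

if __name__ == "__main__":
    start_http_server(9100)  # expose /metrics for Prometheus to scrape
    PIPELINE_UP.set(1)
    for tile in ["tile_001", "tile_002"]:
        process_tile(tile)
        time.sleep(1)
```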