DevOps Engineer


Global

Contract / Contract to full time

Remote

About the team

At Neural, we are committed to building the future of AI. The world is changing, and we're at the forefront of that transformation. Our team is dedicated to creating innovative solutions that address the unique challenges of today's dynamic industries and unlock the potential of new markets.

We harness Artificial Intelligence and Machine Learning to drive innovation and create solutions that shape industries. We believe the future of AI is in your hands, and our mission is to empower individuals and organizations to use that power to achieve their goals. Join us in shaping the future of AI today.

About the position

Neural is seeking a highly skilled DevOps Engineer to join our growing team on a contract basis. You will play a pivotal role in ensuring that our AI, geospatial, and data-intensive platforms are reliable, scalable, and secure. As we continue to build cutting-edge tools that leverage spatial intelligence and machine learning, your contribution will directly impact our ability to deliver performant applications and services to customers in real time.

This role is suited for someone who thrives in complex infrastructure environments and enjoys building automation workflows, managing containerized systems, and optimizing deployments across hybrid or multi-cloud setups. The ideal candidate has experience deploying and managing microservices, maintaining infrastructure-as-code, and working with ML or geospatial workloads in production. You’ll work closely with engineers, scientists, and product stakeholders to ensure the platform infrastructure supports evolving project demands without compromising stability or performance.

This position offers the opportunity to build and scale core DevOps capabilities in a high-impact, agile team environment while supporting mission-critical applications across government, environmental, and commercial domains.

Responsibilities

  • Build, monitor, and maintain CI/CD pipelines for geospatial and ML-based applications.
  • Manage containerized applications using Docker and Kubernetes across multiple environments.
  • Automate infrastructure provisioning and configuration using Terraform, Ansible, or similar tools.
  • Implement observability tooling (e.g., Prometheus, Grafana, CloudWatch) for proactive system monitoring.
  • Collaborate with software and ML teams to streamline deployment of services and models.
  • Optimize cloud infrastructure costs and performance (AWS, Azure, or GCP).
  • Implement security best practices for data protection, access controls, and compliance.

Qualifications

  • 3+ years of experience in DevOps, Site Reliability Engineering (SRE), or Platform Engineering.
  • Strong proficiency with containerization and orchestration tools (Docker, Kubernetes).
  • Hands-on experience with infrastructure-as-code (Terraform, Pulumi, Ansible, etc.).
  • Proficiency in scripting languages such as Bash or Python.
  • Experience with CI/CD systems like GitHub Actions, GitLab CI, Jenkins, or Argo.
  • Working knowledge of Linux server administration and system security.
  • Familiarity with monitoring, logging, and alerting systems.
  • Experience working in cloud environments (AWS preferred).

Preferred Qualifications

  • Exposure to MLOps, model serving, and ML pipeline automation.
  • Experience supporting geospatial platforms or data pipelines.
  • Familiarity with tools like MLflow, Airflow, or DVC.
  • Prior work in early-stage or agile product development environments.

Travel

Up to 10%

Apply now
