
DATA ENGINEERING CONSULTANT
About NutaNXT: NutaNXT is a next-gen Software Product Engineering services provider building ground-breaking products using AI/ML, Data Analytics, IoT, Cloud, and other emerging technologies that are disrupting global markets. Our mission is to help clients leverage our specialized Digital Product Engineering capabilities in Data Engineering, AI Automation, and full-stack software solutions and services to build best-in-class products and stay ahead of the curve. You will get a chance to work on multiple projects critical to NutaNXT's needs, with opportunities to learn, develop new skills, and switch teams and projects as you and our fast-paced business grow and evolve. Location: Pune | Experience: 6 to 8 years
Job Description: NutaNXT is looking for a consultant to support the planning and implementation of data design services, provide sizing and configuration assistance, and perform needs assessments, as well as deliver architectures for the transformation and modernization of enterprise data solutions using Azure cloud data technologies. As a Data Engineering Consultant, you will collect, aggregate, store, and reconcile data in support of the customer's business decisions. You will design and build data pipelines, data streams, data service APIs, data generators, and other end-user information portals and insight tools.
Mandatory Skills:
- Demonstrable experience with enterprise-level data platforms, including implementation of end-to-end data pipelines with Python or Scala
- Hands-on experience with at least one of the leading public cloud data platforms (ideally Azure)
- Experience with different databases (column-oriented databases, NoSQL databases, RDBMS)
- Experience architecting data pipelines and solutions for both streaming and batch integrations using tools/frameworks like Azure Databricks, Azure Data Factory, Spark, Spark Streaming, etc.
- Understanding of data modeling, warehouse design, and fact/dimension concepts
- Good communication skills
Good To Have:
- Certifications for any of the cloud services (ideally Azure)
- Experience working with code repositories and continuous integration
- Understanding of development and project methodologies
Why Join Us?
We offer innovative work in the AI and Data Engineering space, a unique and diverse workplace environment, and continuous learning and development opportunities. These are just some of the reasons we're consistently recognized as one of the best companies to work for, and why our people choose to grow their careers at NutaNXT. We also offer a highly flexible, self-driven, remote work culture that fosters innovation, creativity, and work-life balance, along with industry-leading compensation. We believe this helps us consistently deliver for our clients and grow in the highly competitive, fast-evolving Digital Engineering space, with a strong focus on building advanced software products for clients in the US, Europe, and APAC regions.

REVIEW CRITERIA:
MANDATORY:
- Strong Senior/Lead DevOps Engineer Profile
- Must have 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
- Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
- Must have solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
- Must have hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
- Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
- Must have experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
- Must have good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
- Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
- It's an individual contributor (IC) role
PREFERRED:
- Proficiency in scripting languages (Bash, Python) for automation and operational tasks
- Strong understanding of security best practices, IAM, WAF, and GuardDuty configurations
- Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Candidates from the NCR region only (no outstation candidates)
ROLES AND RESPONSIBILITIES:
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
KEY RESPONSIBILITIES:
Cloud & Infrastructure as Code (IaC)-
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
- Optimize performance of containerized and microservices applications.
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, secure access policies, and AWS Organizations.
Databases & Analytics:
- Administer and optimize MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Work with Apache Airflow and Spark.
IDEAL CANDIDATE:
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES, etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening, etc.).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (shell/Bash, Python, or similar) for automation.
- Bachelor's or Master's degree
- Effective communication skills
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
- Strong experience in Azure – mainly Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines.
- Proven ability and experience in registering and deploying ML/AI/GenAI models via Azure ML Studio.
- Working knowledge of deploying models in AKS clusters.
- Design and implement data processing, training, inference, and monitoring pipelines using Azure ML.
- Excellent Python skills – environment setup and dependency management, coding as per best practices, and knowledge of automatic code review tools like linting and Black.
- Experience with MLflow for model experiments, logging artifacts and models, and monitoring.
- Experience in orchestrating machine learning pipelines using MLOps best practices.
- Experience in DevOps with CI/CD knowledge (Git in Azure DevOps).
- Experience in model monitoring (drift detection and performance monitoring).
- Fundamentals of data engineering.
- Docker-based deployment is good to have.
We’re Hiring: Pricing Analyst 💼
📍 Location: Hyderabad
🕘 Timings: 9:30 AM – 6:30 PM | 🗓️ 5 Days Working
💼 Experience: 2+ Years
🏢 Industry: Design | Hospitality | Facade
✨ Key Responsibilities:
📊 Analyze market trends, competitor pricing & cost structures
💡 Recommend optimal pricing strategies for profitability
📈 Monitor margins, costs & sales performance
🤝 Collaborate with Sales, Procurement & Finance teams
📑 Prepare pricing models & forecasts for business planning
📉 Support contract negotiations with data-driven insights
🎓 Requirements:
✅ Bachelor’s degree (any field)
💪 2+ yrs in Pricing / Financial / Business Analysis
📘 Strong Excel & analytical skills
🗣️ Excellent communication & presentation abilities
📩 Share your resume
Byteridge is currently hiring for Manual Test Engineers with experience in the ERP domain and strong hands-on skills in API testing using Postman.
Role: QA Engineer (Manual Testing)
Domain: ERP
Requirement: Must have experience with Postman for API testing
Location/Mode: Remote
About the company:
Byteridge is a digital product design and development company that builds tailored web and mobile app solutions for global clients. Recognized as a Great Place to Work for 2025-26, Byteridge is where talented professionals work on exciting, impactful products in a collaborative and growth-driven environment.
Hi,
Greetings from Coders Brain Technology Pvt. Ltd.
Coders Brain is a global leader in its services, digital, and business solutions that partners with its clients to simplify, strengthen, and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise, and a global network of innovation and delivery centers.
Location: Delhi
Client Name: Infogain
Position: Permanent with Coders Brain Technology Pvt. Ltd.
Experience: 4+ Years
Notice Period: Immediate or 15 days
Role: Dot Net Developer
- Must have .NET developers with experience in .NET Core, an ORM tool (any of LINQ/Dapper/NHibernate), RESTful services, and Azure PaaS
- years of hands-on experience in developing .NET applications
- Good understanding of coding standards
- Ability to work on applications from scratch, enhancements, defect fixes, and releases
Supporting today's data-driven business world, our client is a full-stack data intelligence platform that leverages granular, deep data from various sources to help decision-makers at the executive level. Their solutions include supply chain optimization, building footprints, construction-hotspot tracking, real estate, and lots more.
Their work embeds geospatial analytics, location intelligence, and predictive modeling in the foundations of economic modeling and evaluation theory to build data intelligence layers for their clients, which include governments, multilateral institutions, and private organizations.
Headquartered in New Delhi, our client is a team of economists, data scientists, geospatial analysts, and more. Their decision-support system includes Big Data, predictive modeling, forecasting, socio-economic datasets, and much more.
As a Talent Acquisition Specialist, you will be responsible for coordinating with hiring managers to identify staffing needs and precise requirements.
What you will do:
- Supporting in building JDs & other hiring materials
- Devising and implementing sourcing strategies to build pipelines of potential applicants, including coordination with hiring agencies
- Creating and implementing end-to-end candidate hiring processes
- Working with hiring managers to ensure clear candidate/interviewer expectations
- Building and managing onboarding protocols and checklists
Desired Candidate Profile
- MBA (HR) preferred
- Non-MBAs with 2+ years of experience in related fields can also apply
- Work experience in talent acquisition teams of start-ups or technology companies
- Familiarity with hiring portals & professional networks like LinkedIn
Requirement
- Strong knowledge of PHP frameworks (Laravel and Zend are a must).
- Strong knowledge of front-end technologies such as React.js or Vue.js.
- Knowledge of object-oriented programming.
- Write effective APIs.
- Write technical documentation.
- Familiarity with SQL/NoSQL databases.
- Understanding of code versioning tools like GitHub.
- Take full responsibility for task/project execution.
- Good problem-solving skills, data structures, and algorithms.
- Should have experience with web services (SOAP/REST) and JSON.
- Develop and deploy new features to facilitate related procedures and tools as necessary.
- Excellent communication and teamwork skills.
- Work with us as a team leader.
Senior Team Lead, Software Engineering (96386)
Role: Senior Team Lead
Skills: Must be an expert in the following:
- Java
- Microservices
- Hadoop
- People Management Skills.
Knowledge of the following will be a plus:
AWS
Location: Bangalore India – North Gate.
Job Summary
- 5 to 8 years of experience with Python, and well versed with RDBMS (SQL Server preferred).
- Should have good experience in Data Structures, Algorithms, NumPy, and Pandas.
- Familiar with JSON and REST APIs
- Strong knowledge of object-oriented and parallel programming techniques
- Experience with test-driven development (TDD)
- Excellent analytical and problem-solving skills
- Good interpersonal skills
- Good team player
Skills:
Python Developer
Python
API
RDBMS
- Design and develop cloud native enterprise applications
- Define and establish conventions, standards and best practices for the SDLC process and ensure that quality control is of paramount importance at each step of the development life cycle
- Develop reusable frameworks and libraries that can drastically accelerate new application development in the future
- Participate in the requirement analysis and gathering process, and sit with business teams to ensure there is full clarity on the problem statement
- Actively engage with the infrastructure team and take ownership of DevOps processes to ensure that build and deployment processes are efficient and optimal.
- Actively engage with project stakeholders to ensure all are in sync with the progress, risks and issues
- Quickly learn and adopt cutting edge technologies to help keep the org ahead of the curve







