

About Nextalytics Software Services Pvt Ltd

Review Criteria
- Strong Senior/Lead DevOps Engineer Profile
- 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
- Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
- Solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
- Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
- Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
- Experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
- Good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
- Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
- It's an individual contributor (IC) role
Preferred
- Proficiency in scripting languages (Bash, Python) for automation and operational tasks.
- Strong understanding of security best practices, IAM, WAF, and GuardDuty configurations.
- Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
Job Specific Criteria
- CV Attachment is mandatory
- Are you okay to come F2F for HM Interview round?
- Reason for Change?
- Please provide CTC breakup.
- Please provide career summary / skills.
- How many years of experience do you have in AWS?
Role & Responsibilities
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
Key Responsibilities:
Cloud & Infrastructure as Code (IaC)-
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
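One concrete wrinkle behind the VPC Peering bullet above: AWS rejects peering connections between VPCs with overlapping CIDR ranges, so automation usually pre-validates CIDRs before provisioning. A minimal sketch of that pre-flight check, using only Python's standard library (the CIDR values are illustrative placeholders):

```python
import ipaddress

def can_peer(vpc_cidr: str, peer_cidr: str) -> bool:
    """VPC peering requires non-overlapping CIDR blocks; check before
    attempting to provision the peering connection."""
    a = ipaddress.ip_network(vpc_cidr)
    b = ipaddress.ip_network(peer_cidr)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))   # disjoint ranges: peerable
print(can_peer("10.0.0.0/16", "10.0.128.0/20")) # nested range: not peerable
```

In Terraform or CloudFormation the same guard would typically live in a validation step or pipeline stage before `aws_vpc_peering_connection` is applied.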
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub Actions, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
- Optimize performance of containerized and microservices applications.
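For context on the IRSA bullet above: IRSA works by attaching an IAM role whose trust policy allows `sts:AssumeRoleWithWebIdentity` from the cluster's OIDC provider, scoped to a single Kubernetes service account. A sketch of generating that trust policy in Python (the account ID, OIDC provider URL, namespace, and service-account name below are made-up placeholders):

```python
import json

def irsa_trust_policy(account_id, oidc_provider, namespace, service_account):
    """Build the IAM trust policy that lets exactly one Kubernetes service
    account assume the role via the EKS cluster's OIDC identity provider."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc_provider}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {"StringEquals": {
                # Scope the role to a single namespace/service-account pair.
                f"{oidc_provider}:sub": f"system:serviceaccount:{namespace}:{service_account}"
            }},
        }],
    }

policy = irsa_trust_policy("111122223333",
                           "oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLE",
                           "payments", "payments-sa")
print(json.dumps(policy, indent=2))
```

The pod then gets the role through the `eks.amazonaws.com/role-arn` annotation on its service account, with no node-level credentials involved.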
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, secure access policies, and AWS Organizations.
Databases & Analytics:
- Administer MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Build and maintain data workflows with Apache Airflow and Spark.
Ideal Candidate
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening etc).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/bash, Python, or similar) for automation.
- Bachelor's or Master's degree.
- Effective communication skills.
Job Title: Senior AI/ML/DL Engineer
Location: Hyderabad
Department: Artificial Intelligence/Machine Learning
Job Summary:
We are seeking a highly skilled and motivated Senior AI/ML/DL Engineer to contribute to
the development and implementation of advanced artificial intelligence, machine learning,
and deep learning solutions. The ideal candidate will have a strong technical background in
AI/ML/DL, hands-on experience in building scalable models, and a passion for solving
complex problems using data-driven approaches. This role involves working closely with
cross-functional teams to deliver innovative AI/ML solutions aligned with business objectives.
Key Responsibilities:
Technical Execution:
● Design, develop, and deploy AI/ML/DL models and algorithms to solve business
challenges.
● Stay up-to-date with the latest advancements in AI/ML/DL technologies and integrate
them into solutions.
● Implement best practices for model development, validation, and deployment.
Project Development:
● Collaborate with stakeholders to identify business opportunities and translate them
into AI/ML projects.
● Work on the end-to-end lifecycle of AI/ML projects, including data collection,
preprocessing, model training, evaluation, and deployment.
● Ensure the scalability, reliability, and performance of AI/ML solutions in production
environments.
Cross-Functional Collaboration:
● Work closely with product managers, software engineers, and domain experts to
integrate AI/ML capabilities into products and services.
● Communicate complex technical concepts to non-technical stakeholders effectively.
Research and Innovation:
● Explore new AI/ML techniques and methodologies to enhance solution capabilities.
● Prototype and experiment with novel approaches to solve challenging problems.
● Contribute to internal knowledge-sharing initiatives and documentation.
Quality Assurance & MLOps:
● Ensure the accuracy, robustness, and ethical use of AI/ML models.
● Implement monitoring and maintenance processes for deployed models to ensure long-term performance.
● Follow MLOps practices for efficient deployment and monitoring of AI/ML solutions.
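The monitoring responsibility above usually includes drift detection on deployed models. One widely used heuristic is the Population Stability Index (PSI) between the training-time feature distribution and the live one; a minimal sketch, assuming inputs are already binned into proportions (the 0.1/0.25 thresholds are a common rule of thumb, not a standard):

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (lists of bin proportions that each sum to 1). Rule of thumb:
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        score += (a - e) * math.log(a / e)
    return score

baseline = [0.25, 0.25, 0.25, 0.25]
print(psi(baseline, baseline))                  # identical: no drift
print(psi(baseline, [0.7, 0.1, 0.1, 0.1]))      # heavy shift toward bin 0
```

In an MLOps setup this check would run on a schedule against recent inference logs and raise an alert when the score crosses the chosen threshold.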
Qualifications:
Education:
● Bachelor's, Master's, or Ph.D. in Computer Science, Data Science, Artificial Intelligence, Machine Learning, or a related field.
Experience:
● 5+ years of experience in AI/ML/DL, with a proven track record of delivering AI/ML solutions in production environments.
● Strong experience with programming languages such as Python, R, or Java.
● Proficiency in AI/ML frameworks and tools (e.g., TensorFlow, PyTorch, scikit-learn, Keras).
● Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies
(e.g., Hadoop, Spark).
● Familiarity with MLOps practices and tools for model deployment and monitoring.
Skills:
● Strong understanding of machine learning algorithms, deep learning architectures,
and statistical modeling.
● Excellent problem-solving and analytical skills.
● Strong communication and interpersonal skills.
● Ability to manage multiple projects and prioritize effectively.
Preferred Qualifications:
● Experience in natural language processing (NLP), computer vision, or reinforcement
learning.
● Knowledge of ethical AI practices and regulatory compliance.
● Publications or contributions to the AI/ML community (e.g., research papers, open-source projects).
What We Offer:
● Competitive salary and benefits package.
● Opportunities for professional development and career growth.
● A collaborative and innovative work environment.
● The chance to work on impactful projects that leverage cutting-edge AI/ML technologies.
- Strong Snowflake cloud database experience as a database developer.
- Knowledge of Spark and Databricks is desirable.
- Strong technical background in data modelling, database design, and optimization for data warehouses, specifically on column-oriented MPP architectures
- Familiar with technologies relevant to data lakes such as Snowflake
- Candidate should have strong ETL & database design/modelling skills.
- Experience creating data pipelines
- Strong SQL skills, debugging knowledge, and performance-tuning experience.
- Experience with Databricks / Azure is a plus.
- Experience working with global teams and global application environments
- Strong understanding of SDLC methodologies, with a track record of high-quality deliverables and data quality, including detailed technical design documentation
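The pipeline experience asked for above often centers on incremental (watermark-driven) loads into a warehouse like Snowflake: track the highest timestamp already processed and pull only newer rows on each run. A language-agnostic sketch of the pattern, shown here in plain Python with tuples standing in for staged rows:

```python
def incremental_batch(rows, watermark):
    """Watermark-driven incremental load: keep only rows newer than the
    last processed timestamp, and return the advanced watermark.
    `rows` are (timestamp, payload) tuples, e.g. from a staging query."""
    fresh = [r for r in rows if r[0] > watermark]
    new_watermark = max((r[0] for r in fresh), default=watermark)
    return fresh, new_watermark

rows = [(1, "a"), (2, "b"), (3, "c")]
batch, wm = incremental_batch(rows, watermark=1)
print(batch, wm)  # only rows 2 and 3 load; watermark advances to 3
```

In a real ETL job the watermark would be persisted (e.g. in a control table) between runs, and the filter would be pushed down into the extraction SQL rather than done in application code.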
Senior Software Engineer - 221254.
We (the Software Engineer team) are looking for a motivated, experienced person with a data driven approach to join our Distribution Team in Budapest or Szeged to help design, execute and improve our test sets and infrastructure for producing high-quality Hadoop software.
A Day in the life
You will be part of a team that makes sure our releases are predictable and deliver high value to the customer. This team is responsible for automating and maintaining our test harness, and making test results reliable and repeatable.
You will…
•work on making our distributed software stack more resilient to high-scale endurance runs and customer simulations
•provide valuable fixes to our product development teams for the issues you’ve found during exhaustive test runs
•work with product and field teams to make sure our customer simulations match the expectations and can provide valuable feedback to our customers
•work with amazing people - We are a fun & smart team, including many of the top luminaries in Hadoop and related open source communities. We frequently interact with the research community, collaborate with engineers at other top companies & host cutting edge researchers for tech talks.
•do innovative work - Cloudera pushes the frontier of big data & distributed computing, as our track record shows. We work on high-profile open source projects, interacting daily with engineers at other exciting companies, speaking at meet-ups, etc.
•be a part of a great culture - Transparent and open meritocracy. Everybody is always thinking of better ways to do things, and coming up with ideas that make a difference. We build our culture to be the best workplace in our careers.
You have...
•strong knowledge in at least 1 of the following languages: Java / Python / Scala / C++ / C#
•hands-on experience with at least 1 of the following configuration management tools: Ansible, Chef, Puppet, Salt
•confidence with Linux environments
•ability to identify critical weak spots in distributed software systems
•experience in developing automated test cases and test plans
•ability to deal with distributed systems
•solid interpersonal skills conducive to a distributed environment
•ability to work independently on multiple tasks
•self-driven & motivated, with a strong work ethic and a passion for problem solving
•the drive to innovate, automate, and break the code
The right person in this role has an opportunity to make a huge impact at Cloudera and add value to our future decisions. If this position has piqued your interest and you have what we described - we invite you to apply! An adventure in data awaits.
Our mission is to make e-commerce easy and accessible for everyone. We believe that all businesses should have an equal playing field in terms of resources to build and grow their online business. With our eCommerce tools and comprehensive platform, business owners can better understand, analyze, and grow their business. Starting a business and making products is hard enough; selling products and growing the business shouldn’t be.
We’re a small team looking for passionate, execution-focused self-starters to help us build the next generation eCommerce platform and level the playing field for all. Our success depends on building teams who can challenge each other's assumptions with fresh perspectives. To that end, we don’t just accept differences – we celebrate them. If that sounds exciting to you, let’s talk!
We are expanding our Engineering team to India and building a stellar and diverse team composed of owners. We are looking for a backend engineer with a demonstrated track record of developing and maintaining production services, innovative thinking, and technical excellence. As a backend engineer, you will be responsible for building out the service layer that powers our frontend applications. You should have a solid understanding of the software development lifecycle and software design principles. This is a great opportunity if you are looking for a huge impact at a small start-up with immense growth potential.
What you will do?
Create technical plans of projects assigned to you.
Come up with well-structured solutions to ambiguous problems and implement them.
Ship high-quality, well-tested, secure, and maintainable backend code.
Provide technical direction on our various products and upcoming projects.
Champion reliability and quality by using best practices in software engineering and modular design.
Ensure all components are scalable, maintainable, and have in-built metrics instrumentation and monitoring.
Deliver an exceptional user experience to our customers. Put the customer first and have quality in mind.
Own the full release cycle from development to deployment.
What you will need?
3+ Years of experience as a software engineer working on backend applications.
Advanced knowledge of Python (preferably, the Django framework) and relational databases.
Experience with agile, test-driven development, continuous integration, and automated testing.
Experience with building, modifying, and extending API endpoints (REST or GraphQL) for data retrieval and persistence.
Experience with the full software development life cycle, including requirements collection, design, implementation, testing, and operational support.
Excellent verbal and written communication, teamwork, decision making and influencing skills.
Experience with scrum or other agile software development methodology.
Hustle: thrives in an evolving, fast-paced, ambiguous work environment.
Bonus if you have
Experience working in the eCommerce domain.
Experience with AWS technologies like Elastic Beanstalk, Amplify, etc.
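The API-endpoint experience described above boils down to a small contract: create a resource, retrieve it, and return sensible status codes. A framework-agnostic sketch of that contract in plain Python (in a Django/DRF codebase these would be views or viewset methods; the `/products/` resource and field names are invented for illustration):

```python
import json

# In-memory stand-in for the relational store a real service would use.
_PRODUCTS = {}
_next_id = 1

def create_product(body: str):
    """POST /products/ - persist the payload and echo it back with an id."""
    global _next_id
    data = json.loads(body)
    data["id"] = _next_id
    _PRODUCTS[_next_id] = data
    _next_id += 1
    return 201, data

def get_product(product_id: int):
    """GET /products/<id>/ - 404 when the resource does not exist."""
    if product_id not in _PRODUCTS:
        return 404, {"detail": "Not found"}
    return 200, _PRODUCTS[product_id]

status, product = create_product('{"name": "widget", "price": 9.99}')
print(status, product)
```

The same create/retrieve semantics apply whether the transport is REST or GraphQL; only the routing and serialization layers change.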
- Performs analytics to extract insights from the organization's raw historical data.
- Generates usable training datasets for any/all MV projects with the help of Annotators, if needed.
- Analyses user trends and identifies their biggest bottlenecks in the Hammoq workflow.
- Tests the short/long-term impact of productized MV models on those trends.
- Skills (mandatory): NumPy, Pandas, Apache Spark, PySpark, ETL.
We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers. A candidate with an adaptable and productive working style which fits in a fast-moving environment.
Skills:
- 5+ years deploying Machine Learning pipelines in large enterprise production systems.
- Experience developing end to end ML solutions from business hypothesis to deployment / understanding the entirety of the ML development life cycle.
- Expert in modern software development practices; solid experience using source control management (CI/CD).
- Proficient in designing relevant architecture / microservices to fulfil application integration, model monitoring, training / re-training, model management, model deployment, model experimentation/development, alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like Lambda, Azure Functions, and/or Cloud Functions.
- Orchestration services like Data Factory, Data Pipeline, and/or Dataflow.
- Data science workbench / managed services like Azure Machine Learning, SageMaker, and/or AI Platform.
- Data warehouse services like Snowflake, Redshift, BigQuery, and/or Azure SQL DW.
- Distributed computing services like PySpark, EMR, Databricks.
- Data storage services like Cloud Storage, S3, Blob Storage, S3 Glacier.
- Data visualization tools like Power BI, Tableau, QuickSight, and/or Qlik.
- Proven experience serving up predictive algorithms and analytics through batch and real-time APIs.
- Solid working experience with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design the holistic solution.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.)
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit-learn, Spark MLlib, etc.)
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.)
- Experience in developing and debugging in one or more of the languages Java, Python.
- Ability to work in cross functional teams.
- Apply Machine Learning techniques in production including, but not limited to, neural nets, regression, decision trees, random forests, ensembles, SVM, Bayesian models, K-Means, etc.
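On the batch vs. real-time serving point above: a common design is to expose one scoring function through both paths, so the two APIs can never disagree on model logic. A minimal sketch (the linear "model" with hand-picked weights is a stand-in, not a real trained artifact):

```python
# Stand-in model: a linear scorer over two features.
WEIGHTS = {"age": 0.03, "tenure": 0.1}
BIAS = -0.5

def predict_one(features: dict) -> float:
    """Real-time path: score a single feature dict (one API request)."""
    return BIAS + sum(WEIGHTS[k] * v for k, v in features.items())

def predict_batch(batch: list) -> list:
    """Batch path: map the same scoring function over many records,
    so batch and real-time predictions stay consistent by construction."""
    return [predict_one(f) for f in batch]

scores = predict_batch([{"age": 30, "tenure": 2}, {"age": 50, "tenure": 10}])
print(scores)
```

In production the real-time path would sit behind an HTTP endpoint and the batch path behind a scheduled job, but both would import the same `predict_one`.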
Roles and Responsibilities:
Deploying ML models into production, and scaling them to serve millions of customers.
Technical solutioning skills with deep understanding of technical API integrations, AI / Data Science, BigData and public cloud architectures / deployments in a SaaS environment.
Strong stakeholder relationship management skills - able to influence and manage the expectations of senior executives.
Strong networking skills with the ability to build and maintain strong relationships with both business, operations and technology teams internally and externally.
Provide software design and programming support to projects.
Qualifications & Experience:
Engineering graduates or postgraduates, preferably in Computer Science from premier institutions, with 5-7 years of proven work experience as a Machine Learning Architect (Deployments) or in a similar role.
We are building a global content marketplace that brings companies and content
creators together to scale up content creation processes across 50+ content verticals and 150+ industries. Over the past 2.5 years, we’ve worked with companies like India Today, Amazon India, Adobe, Swiggy, Dunzo, Businessworld, Paisabazaar, IndiGo Airlines, Apollo Hospitals, Infoedge, Times Group, Digit, BookMyShow, UpGrad, Yulu, YourStory, and 350+ other brands.
Our mission is to become the world’s largest content creation and distribution platform for all kinds of content creators and brands.
Our Team
We are a 25+ member company and are scaling up rapidly in both team size and ambition.
If we were to define the kind of people and the culture we have, it would be -
a) Individuals with an Extreme Sense of Passion About Work
b) Individuals with Strong Customer and Creator Obsession
c) Individuals with Extraordinary Hustle, Perseverance & Ambition
We are on the lookout for individuals who are always open to going the extra mile and thrive in a fast-paced environment. We are strong believers in building a great, enduring company that can outlast its builders and create a massive impact on the lives of our employees, creators, and customers alike.
Our Investors
We are fortunate to be backed by some of the industry’s most prolific angel investors - Kunal Bahl and Rohit Bansal (Snapdeal founders); Shradha Sharma (YourStory Media); Dr. Saurabh Srivastava, Co-founder of IAN and NASSCOM; Slideshare co-founder Amit Ranjan; Indifi's Co-founder and CEO Alok Mittal; Sidharth Rao, Chairman of Dentsu Webchutney; Ritesh Malik, Co-founder and CEO of Innov8; Sanjay Tripathy, former CMO of HDFC Life and CEO of Agilio Labs; Manan Maheshwari, Co-founder of WYSH; and Hemanshu Jain, Co-founder of Diabeto.
Backed by Lightspeed Venture Partners
Job Responsibilities:
● Design, develop, test, deploy, maintain and improve ML models
● Implement novel learning algorithms and recommendation engines
● Apply Data Science concepts to solve routine problems of target users
● Translate business analysis needs into well-defined machine learning problems, and
select appropriate models and algorithms
● Architect, implement, maintain, and monitor data source pipelines that can be used
across different types of data sources
● Monitor performance of the architecture and conduct optimization
● Produce clean, efficient code based on specifications
● Verify and deploy programs and systems
● Troubleshoot, debug and upgrade existing applications
● Guide junior engineers for productive contribution to the development
The ideal candidate (ML and NLP Engineer) must have -
● 4 or more years of experience in ML Engineering
● Proven experience in NLP
● Familiarity with generative language models such as GPT-3
● Ability to write robust code in Python
● Familiarity with ML frameworks and libraries
● Hands on experience with AWS services like Sagemaker and Personalize
● Exposure to state of the art techniques in ML and NLP
● Understanding of data structures, data modeling, and software architecture
● Outstanding analytical and problem-solving skills
● Team player, with the ability to work cooperatively with other engineers.
● Ability to make quick decisions in high-pressure environments with limited information.







