




ML ARCHITECT
Job Overview
We are looking for an ML Architect to help us discover the information hidden in vast amounts of data and make smarter decisions to deliver even better products. Your primary focus will be applying data mining techniques, performing statistical analysis, and building high-quality prediction systems integrated with our products. The candidate must have strong experience using a variety of data mining and data analysis methods, building and implementing models, using and creating algorithms, and creating and running simulations. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes. The role also involves automating the identification of textual data, along with its properties and structure, from various types of documents.
Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Creating automated anomaly detection systems and constantly tracking their performance
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Secure and manage GPU cluster resources for events as needed
- Write comprehensive internal feedback reports and find opportunities for improvements
- Manage GPU instances/machines to increase the performance and efficiency of ML/DL models
Skills and Qualifications
- Strong Hands-on experience in Python Programming
- Working experience with Computer Vision models - Object Detection Model, Image Classification
- Good experience in feature extraction, feature selection techniques and transfer learning
- Working experience building deep learning NLP models for text classification and image analytics models (CNN, RNN, LSTM)
- Working experience with either the AWS or GCP cloud platform; exposure to fetching data from various sources
- Good experience in exploratory data analysis, data visualisation, and other data preprocessing techniques.
- Knowledge of at least one DL framework such as TensorFlow, PyTorch, Keras, or Caffe
- Good knowledge of statistics, data distributions, and supervised and unsupervised machine learning algorithms
- Exposure to OpenCV; familiarity with GPUs and CUDA
- Experience with NVIDIA software for cluster management and provisioning, such as NVSM, DCGM, and DeepOps
- We are looking for a candidate with 14+ years of experience who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with AWS cloud services: EC2, RDS, AWS SageMaker (added advantage)
- Experience with object-oriented/functional scripting languages such as Python, Java, C++, Scala, etc.

About codeMantra


AccioJob is conducting an offline hiring drive with OneLab Ventures for the position of:
- AI/ML Engineer / Intern - Python, Fast API, Flask/Django, PyTorch, TensorFlow, Scikit-learn, GenAI Tools
Apply Now: https://links.acciojob.com/44MJQSB
Eligibility:
- Degree: BTech / BSc / BCA / MCA / MTech / MSc / BCS / MCS
- Graduation Year:
- For Interns - 2024 and 2025
- For experienced - 2024 and before
- Branch: All Branches
- Location: Pune (work from office)
Salary:
- For interns - 25K for 6 months and a 5-6 LPA PPO
- For experienced - Hike on the current CTC
Evaluation Process:
- Assessment at AccioJob Pune Skill Centre.
- Company side process: 2 rounds of tech interviews (virtual + F2F) + 1 HR round
Apply Now: https://links.acciojob.com/44MJQSB
Important: Please bring your laptop & earphones for the test.


Data engineers:
Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. This also includes developing and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity.
Constructing infrastructure for efficient ETL processes from various sources and storage systems.
Collaborating closely with Product Managers and Business Managers to design technical solutions aligned with business requirements.
Leading the implementation of algorithms and prototypes to transform raw data into useful information.
Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
Creating innovative data validation methods and data analysis tools.
Ensuring compliance with data governance and security policies.
Interpreting data trends and patterns to establish operational alerts.
Developing analytical tools, utilities, and reporting mechanisms.
Conducting complex data analysis and presenting results effectively.
Preparing data for prescriptive and predictive modeling.
Continuously exploring opportunities to enhance data quality and reliability.
Applying strong programming and problem-solving skills to develop scalable solutions.
Writing unit/integration tests and contributing to documentation.
Must have:
6 to 8 years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
High proficiency in Scala/Java/Python, API frameworks (e.g., Swagger), and Spark for applied large-scale data processing.
Expertise with big data technologies, including Spark, Data Lake, Delta Lake, and Hive, and with API development (e.g., Flask).
Solid understanding of batch and streaming data processing techniques.
Proficient knowledge of the Data Lifecycle Management process, including data collection, access, use, storage, transfer, and deletion.
Expert-level ability to write complex, optimized SQL queries across extensive data volumes.
Experience with RDBMS and OLAP databases like MySQL, Redshift.
Familiarity with Agile methodologies.
Obsession for service observability, instrumentation, monitoring, and alerting.
Knowledge or experience in architectural best practices for building data pipelines.
Good to Have:
Passion for testing strategy, problem-solving, and continuous learning.
Willingness to acquire new skills and knowledge.
Possess a product/engineering mindset to drive impactful data solutions.
Experience working in distributed environments with teams scattered geographically.

6-9 years of relevant experience building web apps at scale
You must have a strong understanding of semantic HTML/HTML5 and CSS/CSS3.
You must have a good understanding of MVC architecture.
Prior work experience in ReactJS is a must.
You must have experience setting up the full UI workflow layer, from development and testing through building and deployment.
A never-give-up attitude
Experience with frameworks like Bootstrap and Foundation and CSS pre-processors like Sass and Less is desirable.
You should have exposure to page speed improvement techniques.
Exposure of building responsive websites at scale will be a plus.
Prior exposure to building React Native components for Hybrid mobile apps will be a plus
Prior work experience in Angular or Backbone along with ReactJS is desirable.
Good understanding of webpack and Redux
This person MUST have:
- A minimum of 3-5 years' prior experience as a DevOps Engineer.
- Expertise in CI/CD pipeline maintenance and enhancement, specifically Jenkins-based pipelines.
- Working experience with engineering tools like Git, Git workflows, Bitbucket, JIRA, etc.
- Hands-on experience deploying and managing infrastructure with CloudFormation/Terraform
- Experience managing AWS infrastructure
- Hands on experience of Linux administration.
- Basic understanding of Kubernetes/Docker orchestration
- Works closely with the engineering team on day-to-day activities
- Manages existing infrastructure, pipelines, and engineering tools (on-prem or AWS) for the engineering team (build servers, Jenkins nodes, etc.)
- Works with the engineering team on new configuration required for infrastructure, such as replicating setups and adding new resources
- Works closely with the engineering team to improve existing build pipelines
- Troubleshoots problems across infrastructure/services
Experience:
- Minimum 5-7 years' experience
Location
- Remotely, anywhere in India
Timings:
- 40 hours a week (11 AM to 7 PM).
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO leaves per year, annual increments, Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.

- Responsible for the successful technical delivery of code and design documents
- Responsible for evaluating and reviewing design frameworks and methodologies.
- Responsible for organizing, documenting, and reviewing work-item scope from a technical point of view.
- Responsible for scoping, designing, development, reviews, and complete delivery from the technical end.
- Review and validate estimates for complex projects to ensure correct sizing of work.
- Participates in POCs, validates complex technical solutions, and performs estimates and collateral consolidation.
- Lead and coach a team of junior developers
Requirements
- Strong experience in the development of microservices-based applications using JEE technologies: Spring Boot, Spring MVC, JPA, and Hibernate.
- Experience in handling integration with multiple legacy systems by creating/consuming services in SOAP/REST/MQ, Apache Camel integration framework.
- Strong experience in developing web applications using single-page architecture and a responsive design using Angular, Bootstrap, and building hybrid mobile applications
- Understanding of front-end technologies, such as HTML5, and CSS3
- Containerizing applications using Docker/Kubernetes
- Setting up CI/CD pipelines using GIT, Jenkins, SonarQube is a plus
- Setting up applications for log monitoring using ELK stack and performance monitoring using Prometheus and Grafana is a plus
- Ensure quality and timeliness of activities related to the design, build, and implementation of the work product; participate in requirements elicitation, architecture validation, and the creation and review of designs; provide pseudo code to the team; and assign and review tasks for work product implementation, with the objective of ensuring the highest levels of service offerings in our technology domain
- Performs high- and low-level design, provides pseudo-code, implements prototypes, and conducts design reviews in order to deliver design documents per customer requirements
- Provides inputs for the overall implementation plan and leads deployment of applications, infrastructure, and post-production support activities
- Interfaces with the customer for issue resolution, provides status updates, and builds customer confidence in the team's ability to deliver in order to support high customer satisfaction
- Understand client-side business requirements and provide value-led solutions. Should have strong, clear verbal and written communication skills, including addressing escalations and presenting status in management meetings. Excellent client-interfacing and mentoring skills
- Experienced in developing microservices/APIs using REST principles
- Experienced in JEE technologies, Spring Boot, Spring MVC, Hibernate, Swagger
- Experienced in using Queues (ActiveMQ/RabbitMQ, Kafka) and integration frameworks like Apache Camel
- Experienced in front-end skills including Angular, CSS, Sass, Node, NPM, and Bootstrap
- Experience with one of the application servers like Tomcat, JBoss EAP, IBM WebSphere
- Experience with relational databases like Postgres, Oracle, and DB2, and NoSQL databases like Elasticsearch, MongoDB, and Neo4j
- Experience in containerizing applications using Docker/Kubernetes
- Experience in setting up CI/CD pipelines using GIT, Jenkins, SonarQube is a plus
- Experience in setting up applications for log monitoring using ELK stack and performance monitoring using Prometheus and Grafana is a plus


- Experience with building scalable applications on Android and iOS
- Experience in working with Dart.
- Knowledge of unit & integration testing
- Knowledge of agile development processes and Jira
- Knowledge of API integration
- Strong UI building skills
- Experience with version control systems (Bitbucket, Git, etc.)
- Strong knowledge of algorithms and data structures
- Demonstrated experience working on application development projects and with test-driven development
- Experience in writing high-quality code
- Experience in Fintech domain will be another added advantage
- Experience in developing mobile apps in Flutter.
- Strong knowledge of architectural patterns like BLoC and Provider in Flutter.
- Ability to think about scalability and reusability while developing Flutter widgets.
- Ability to handle UI updates with high-frequency data changes.
- Knowledge of iOS application deployment.
- Strong state management knowledge.
- Knowledge of writing plugins in Flutter is a good-to-have skill.


