We are looking for a technically driven MLOps Engineer for one of our premium clients.
Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration
(Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at
least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and
container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure
DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow,
Kubeflow)
• Hands-on Python 3 coding skills (e.g., APIs), including automated testing frameworks and
libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts
(e.g., deployments, operators, Helm charts) (see the sketch after this list)
• Experience setting up at least one contemporary MLOps tool (e.g., for experiment tracking,
model governance, packaging, deployment, or a feature store)
• Practical knowledge of delivering and maintaining production software such as APIs and cloud
infrastructure
• Knowledge of SQL (intermediate level or above preferred) and familiarity with at least
one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
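As a flavor of the Python and pytest skills named above, here is a minimal, hypothetical sketch; the `score` function and its contract are invented purely for illustration:

```python
# Hypothetical sketch: pytest-style automated tests for a toy scoring
# function (a stand-in for a model-serving call). Not from the posting.
import pytest

def score(features: dict) -> float:
    """Toy stand-in for a model-serving function."""
    if "age" not in features:
        raise ValueError("missing required feature: age")
    return min(1.0, 0.5 + 0.01 * features["age"])

def test_score_is_bounded():
    assert 0.0 <= score({"age": 30}) <= 1.0

def test_missing_feature_raises():
    with pytest.raises(ValueError):
        score({})
```

Running `pytest` against this file would collect and execute both tests.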
Similar jobs
Designation – Deputy Manager - TS
Job Description
- Total of 8-9 years of development experience in Data Engineering (B1/BII role).
- Minimum of 4-5 years in AWS data integration, with strong data modelling skills.
- Should be highly proficient in end-to-end AWS data solution design, covering not only strong data ingestion and integration skills (both data at rest and data in motion) but also complete DevOps knowledge.
- Should have experience in delivering at least 4 Data Warehouse or Data Lake Solutions on AWS.
- Should have strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc. (see the sketch after this list).
- Strong Python skills.
- Should be an expert in cloud design principles, performance tuning, and cost modelling; AWS certifications will be an added advantage.
- Should be a team player with excellent communication skills, able to manage their work independently with minimal or no supervision.
- Life Science & Healthcare domain background will be a plus.
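As a sketch of the Glue/Lambda plumbing referred to above, assuming boto3 and invented job and event names (this is not the client's actual setup):

```python
# Hypothetical sketch: a Python Lambda handler that starts an AWS Glue
# job run. The job name and event shape are invented for illustration.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Kick off a (hypothetical) Glue ETL job, forwarding the S3 path
    # carried by the triggering event.
    run = glue.start_job_run(
        JobName="daily-sales-etl",                    # assumed job name
        Arguments={"--input_path": event["s3_path"]}, # assumed event key
    )
    return {"JobRunId": run["JobRunId"]}
```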
Qualifications
BE/BTech/ME/MTech in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. Candidates should also have experience using the following software/tools:
● Experience with big data tools: Hadoop/Hive, Spark, Kafka, etc.
● Experience querying multiple SQL/NoSQL databases, including
Oracle, MySQL, MongoDB, etc.
● Experience with Redis, RabbitMQ, and Elasticsearch is desirable.
● Strong experience with object-oriented/functional/scripting languages:
Python (preferred), Core Java, JavaScript, Scala, shell scripting, etc.
● Must have strong skills in debugging complex code; experience with ML/AI
algorithms is a plus.
● Experience with a version control tool (Git or similar) is mandatory.
● Experience with AWS cloud services: EC2, EMR, RDS, Redshift, S3
● Experience with stream-processing systems: Storm, Spark Streaming, etc.
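To illustrate the stream-processing experience mentioned in the last bullet, a minimal, hypothetical PySpark Structured Streaming sketch (broker address and topic are invented placeholders):

```python
# Hypothetical sketch: read a Kafka topic with Spark Structured
# Streaming and print running counts per message key to the console.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "clicks")                     # assumed topic
    .load()
)

# Maintain a running count of events per key.
counts = events.groupBy("key").count()

query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```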
Data Warehousing Engineer - Big Data/ETL
at Marktine
Must Have Skills:
- Solid knowledge of DWH, ETL, and big data concepts
- Excellent SQL skills, with knowledge of SQL analytic (window) functions (see the sketch after this list)
- Working experience with an ETL tool, e.g., SSIS / Informatica
- Working experience with Azure or AWS big data tools
- Experience implementing data jobs (batch / real-time streaming)
- Excellent written and verbal communication skills in English; self-motivated, with a strong sense of ownership and a readiness to learn new tools and technologies
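As a flavor of the SQL analytic (window) functions mentioned above, a small self-contained sketch using Python's built-in sqlite3 module (table and data are invented; window functions require SQLite 3.25+):

```python
# Hypothetical sketch: rank rows within groups using a window function.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('north', 10), ('north', 30), ('south', 20);
""")

rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
print(rows)  # [('north', 30.0, 1), ('north', 10.0, 2), ('south', 20.0, 1)]
```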
Preferred Skills:
- Experience with PySpark / Spark SQL
- AWS Data Tools (AWS Glue, AWS Athena)
- Azure Data Tools (Azure Databricks, Azure Data Factory)
Other Skills:
- Knowledge of Azure Blob, Azure File Storage, AWS S3, Elasticsearch / Redis Search
- Knowledge of the domain/function (across pricing, promotions, and assortment)
- Implementation experience with a schema and data validator framework (Python / Java / SQL) (see the sketch after this list)
- Knowledge of DQS and MDM
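A minimal sketch of what a schema/data validator might look like (field names and rules are invented; a real framework would add type coercion, rule composition, and reporting):

```python
# Hypothetical sketch: validate one record against a simple type schema.
def validate(row: dict, schema: dict) -> list:
    """Return a list of violations for one record."""
    errors = []
    for field, expected_type in schema.items():
        if field not in row:
            errors.append(f"missing field: {field}")
        elif not isinstance(row[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

schema = {"sku": str, "price": float}                 # assumed fields
print(validate({"sku": "A1", "price": "9.9"}, schema))
# -> ['price: expected float']
```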
Key Responsibilities:
- Independently work on ETL / DWH / Big data Projects
- Gather and process raw data at scale.
- Design and develop data applications using selected tools and frameworks as required and requested.
- Read, extract, transform, stage and load data to selected tools and frameworks as required and requested.
- Perform tasks such as writing scripts, web scraping, calling APIs, writing SQL queries, etc.
- Work closely with the engineering team to integrate your work into our production systems.
- Process unstructured data into a form suitable for analysis.
- Analyse processed data.
- Support business decisions with ad hoc analysis as needed.
- Monitor data performance and modify infrastructure as needed.
Responsibility: a smart resource with excellent communication skills.
Data Engineer (Immediate Joiners Only)
at StatusNeo
Experience in at least one of Java, Scala, or Python
Experience in Big Data technologies (Hadoop/Spark/Hive/Presto) and streaming
platforms (Kafka/NiFi/Storm)
Experience in distributed search (Solr/Elasticsearch), in-memory data grids
(Redis/Ignite), cloud-native apps, and Kubernetes is a plus
Experience building REST services and APIs following best practices of service
abstraction and microservices (see the sketch after this job's requirements). Experience in orchestration frameworks is a plus
Experience in Agile methodology and CI/CD: tool integration, automation,
configuration management
Being a committer on one of the open-source Big Data technologies (Spark, Hive,
Kafka, YARN, Hadoop/HDFS) is an added advantage
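As a flavor of the REST/API experience this posting asks for, a minimal, hypothetical FastAPI sketch (the resource model and routes are invented for illustration):

```python
# Hypothetical sketch: a tiny REST service with one resource.
# Run with: uvicorn app:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

ITEMS = {}  # in-memory store standing in for a real database

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> dict:
    ITEMS[item_id] = item
    return {"status": "created", "item_id": item_id}

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="item not found")
    return ITEMS[item_id]
```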
Machine Learning Engineer
at IDfy
● The machine learning team is a self-contained team of 9 people responsible for building models and services that support key workflows for IDfy.
● Our models are gating criteria for these workflows and as such are expected to perform accurately and quickly. We use a mix of conventional and hand-crafted deep learning models.
● The team comes from diverse backgrounds and experiences. We have ex-bankers, startup founders, IIT-ians, and more.
● We work directly with business and product teams to craft solutions for our customers. We know that we are, and function as, a platform company and not a services company.
● You will work on all aspects of a production machine learning system: acquiring data, training and building models, deploying models, building API services to expose these models, maintaining them in production, and more.
● Work on performance tuning of models
● From time to time work on support and debugging of these production systems
● Research the latest technology in our areas of interest and apply it to build new products and enhance the existing platform.
● Build workflows for training and production systems
● Contribute to documentation
About you
● You are an early-career machine learning engineer (or data scientist). Our ideal candidate is
someone with 1-3 years of experience in data science.
Must Haves
● You have a good understanding of Python and scikit-learn, TensorFlow, or PyTorch (see the sketch after this list). Our systems are built with these tools/languages, and we expect a strong base in them.
● You are proficient at exploratory analysis and know which model to use in most scenarios
● You should have worked on framing and solving problems with the application of machine learning or deep learning models.
● You have some experience in building and delivering complete or partial AI solutions
● You appreciate that the role of the Machine Learning Engineer is not only modeling but also building product solutions, and you strive towards this.
● Enthusiasm and drive to learn and assimilate state-of-the-art research. Much of what we are building will require innovative approaches using newly researched models and applications.
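As a flavor of the scikit-learn basics named in the first must-have, a minimal sketch on synthetic data (not IDfy's actual models or data):

```python
# Hypothetical sketch: train and score a small classifier end to end.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, labeled data standing in for a real dataset.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```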
Good to Have
● Knowledge of and experience in computer vision. While a large part of our work revolves around computer
vision, we believe this is something you can learn on the job.
● We build our own services, so we would want you to have some knowledge of writing APIs.
● Our stack also includes languages like Ruby, Go, and Elixir. We would love it if you know any of these or take an interest in functional programming.
● Knowledge of and experience in ML Ops and tooling would be a welcome addition. We use Docker and Kubernetes for deploying our services.
About the Company:
This opportunity is with an AI drone technology startup funded by the Indian Army, working to develop cutting-edge products that help the Indian Army gain an edge in new-age warfare.
They are using drones to neutralize terrorists hidden in deep forests. Get a chance to contribute to securing our borders against the enemy.
Responsibilities:
- Extensive knowledge in machine learning and deep learning techniques
- Solid background in image processing/computer vision
- Experience in building datasets for computer vision tasks
- Experience working with and creating data structures/architectures
- Proficiency in at least one major machine learning framework, such as TensorFlow or PyTorch (see the sketch after this list)
- Experience visualizing data for stakeholders
- Ability to analyze and debug complex algorithms
- Highly skilled in Python scripting
- Creativity and curiosity for solving highly complex problems
- Excellent communication and collaboration skills
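As a flavor of the PyTorch proficiency listed above, a minimal, hypothetical sketch of a tiny convolutional network and a forward pass on a dummy image (the architecture is invented for illustration):

```python
# Hypothetical sketch: define a tiny CNN and run one forward pass.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pool
        self.fc = nn.Linear(8, n_classes)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)

model = TinyNet()
dummy = torch.randn(1, 3, 64, 64)  # one fake 64x64 RGB image
print(model(dummy).shape)          # -> torch.Size([1, 2])
```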
Educational Qualification:
MS in Engineering, Applied Mathematics, Data Science, Computer Science, or an equivalent field with 3 years of industry experience; or a PhD degree; or equivalent industry experience.
Artificial Intelligence Intern
at Bytelearn
2. Build large datasets that will be used to train the models
3. Empirically evaluate related research works
4. Train and evaluate deep learning architectures on multiple large scale datasets
5. Collaborate with the rest of the research team to produce high-quality research
Predictive Modelling And Optimization Consultant (SCM)
at BRIDGEi2i Analytics Solutions
The person holding this position is responsible for leading solution development and implementing advanced analytical approaches across a variety of industries in the supply chain domain.
In this position you act as an interface between the delivery team and the supply chain team, developing an effective understanding of the client's business and supply chain.
Candidates will be expected to lead projects across several areas such as
- Demand forecasting
- Inventory management
- Simulation & Mathematical optimization models.
- Procurement analytics
- Distribution/Logistics planning
- Network planning and optimization
Qualification and Experience
- 4+ years of analytics experience in supply chain, preferably in industries such as hi-tech, consumer technology, CPG, automobile, retail, or e-commerce.
- Master's in Statistics/Economics, or an MBA, or an M.Sc./M.Tech in Operations Research/Industrial Engineering/Supply Chain
- Hands-on experience in delivery of projects using statistical modelling
Skills / Knowledge
- Hands-on experience with statistical modelling software such as R/Python and SQL.
- Experience in advanced analytics / statistical techniques (regression, decision trees, ensemble machine learning algorithms, etc.) will be considered an added advantage.
- Highly proficient with Excel, PowerPoint and Word applications.
- APICS-CSCP or PMP certification will be an added advantage
- Strong knowledge of supply chain management
- Working knowledge of linear/nonlinear optimization (see the sketch after this list)
- Ability to structure problems through a data-driven decision-making process.
- Excellent project management skills, including time and risk management and project structuring.
- Ability to identify and draw on leading-edge analytical tools and techniques to develop creative approaches and new insights to business issues through data analysis.
- Ability to liaise effectively with multiple stakeholders and functional disciplines.
- Experience with optimization tools like CPLEX, ILOG, or GAMS will be an added advantage.
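As a flavor of the linear-optimization knowledge listed above, a toy two-product production plan solved with SciPy (all coefficients are invented; industrial work would use solvers like CPLEX or GAMS, as noted):

```python
# Hypothetical sketch: maximize profit 3x + 5y subject to toy capacity
# constraints, via scipy.optimize.linprog (which minimizes, so negate).
from scipy.optimize import linprog

c = [-3, -5]                  # minimize -(3x + 5y)
A_ub = [[1, 2], [3, -1]]      # x + 2y <= 14;  3x - y <= 0
b_ub = [14, 0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)        # optimal plan (x, y) and resulting profit
```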
Intro
Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops, and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.
What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.
- Optimize and automate ingestion processes for a variety of data sources, such as click-stream, transactional, and many other sources.
- Create and maintain optimal data pipeline architecture and ETL processes
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Develop data pipeline and infrastructure to support real-time decisions
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
What will you need
- 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
- Experience dealing with large-scale data
- Proficiency in writing and debugging complex SQL
- Experience working with AWS big data tools
- Ability to lead projects and implement best data practices and technologies
Data Pipelining
- Strong command of building and optimizing data pipelines, architectures, and data sets
- Strong command of relational SQL and NoSQL databases, including Postgres
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (see the sketch below)
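As a flavor of the workflow-management tools just listed, a minimal, hypothetical Airflow DAG (task logic, IDs, and schedule are invented):

```python
# Hypothetical sketch: a two-task daily ETL DAG in Apache Airflow 2.x.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw events...")        # placeholder task logic

def load():
    print("writing to the warehouse...")  # placeholder task logic

with DAG(
    dag_id="demo_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2  # run extract before load
```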
Big Data: Strong experience in big data tools & applications
- Tools: Hadoop, Spark, HDFS, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift
- Stream-processing systems: Storm, Spark-Streaming, Flink etc.
- Message queuing: RabbitMQ, Kafka, etc.
Software Development & Debugging
- Strong experience in object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Strong hold on data structures and algorithms
What would be a bonus
- Prior experience working in a fast-growth Startup
- Prior experience in payments, fraud, lending, or advertising companies dealing with large-scale data