Data-driven decision-making is core to advertising technology at AdElement. We are looking for sharp, disciplined, and highly quantitative machine learning/artificial intelligence engineers with big data experience and a passion for digital marketing to help drive informed decision-making. You will work with top talent and cutting-edge technology and have a unique opportunity to turn your insights into products influencing billions. The ideal candidate will have an extensive background in distributed training frameworks, experience deploying machine learning models end to end, and some experience making data-driven decisions about machine learning infrastructure improvements. This is your chance to leave your legacy and be part of a highly successful and growing company.
Required Skills
- 3+ years of industry experience with Java/Python in a programming-intensive role
- 3+ years of experience with one or more of the following machine learning topics: classification, clustering, optimization, recommendation systems, graph mining, deep learning
- 3+ years of industry experience with distributed computing frameworks such as Hadoop/Spark, the Kubernetes ecosystem, etc.
- 3+ years of industry experience with popular deep learning frameworks such as Spark MLlib, Keras, TensorFlow, PyTorch, etc.
- 3+ years of industry experience with major cloud computing services
- An effective communicator with the ability to explain technical concepts to a non-technical audience
- (Preferred) Prior experience with ads product development (e.g., DSP/ad-exchange/SSP)
- Able to lead a small team of AI/ML Engineers to achieve business objectives
Responsibilities
- Collaborate across multiple teams - Data Science, Operations & Engineering - on unique machine learning system challenges at scale
- Leverage distributed training systems to build scalable machine learning pipelines, including ETL, model training, and deployment, in the Real-Time Bidding space.
- Design and implement solutions to optimize distributed training execution in terms of model hyperparameter optimization, model training/inference latency and system-level bottlenecks
- Research state-of-the-art machine learning infrastructure to improve data health, model quality, and state management across the ML model refresh lifecycle.
- Optimize integration between popular machine learning libraries and cloud ML and data processing frameworks.
- Build deep learning models and algorithms with optimal parallelism and performance on CPUs/GPUs.
- Work with top management to define team goals and objectives.
Education
- MTech or Ph.D. in Computer Science, Software Engineering, Mathematics or related fields
About AdElement
AdElement is an online advertising startup based in Pune. We do AI-driven ad personalization for video and display ads. Audiences are targeted algorithmically across biddable sources of ad inventory through real-time bidding. We are looking to grow our teams to meet the rapidly expanding market opportunity.
Similar jobs
Changing the way cataloging is done across the globe. Our vision is to empower the smallest of sellers, situated in the farthest of corners, to create superior product images and videos without the need for any external professional help. Imagine 30M+ merchants shooting product images or videos using their smartphones, and then choosing filters for Amazon, Asos, Airbnb, Doordash, etc. to instantly compose high-quality "tuned-in" product visuals.
We are building photo-editing AI software to capture and process beautiful product images for online selling. We are also fortunate and proud to be backed by the biggest names in the investment community, including the likes of Accel Partners, AngelList, and prominent founders and Internet company operators, who believe there is a more intelligent and efficient way of doing digital production than how the world operates today.
Job Description:
- We are looking for a seasoned Computer Vision Engineer with AI/ML/CV and Deep Learning skills to play a senior leadership role in our Product & Technology Research Team.
- You will be leading a team of CV researchers to build models that automatically transform millions of e-commerce, automobile, food, and real-estate raw images into processed final images.
- You will be responsible for researching the latest art of the possible in the field of computer vision, designing the solution architecture for our offerings, and leading the Computer Vision teams to build the core algorithmic models and deploy them on cloud infrastructure.
- Working with the Data team to ensure your data pipelines are well set up and models are being constantly trained and updated.
- Working alongside the Product team to ensure that AI capabilities are built as democratized tools that allow internal as well as external stakeholders to innovate on top of them and make our customers successful.
- You will work closely with the Product & Engineering teams to convert the models into beautiful products that will be used by thousands of businesses every day to transform their images and videos.
Job Requirements:
- 4-5 years of experience overall.
- Strong Computer Vision experience and knowledge.
- BS/MS/PhD degree in Computer Science, Engineering, or a related subject from an Ivy League institute
- Exposure to Deep Learning techniques and TensorFlow/PyTorch
- Prior expertise in building image processing applications using GANs, CNNs, and diffusion models
- Expertise with Image Processing Python libraries like OpenCV, etc.
- Good hands-on experience with Python and the Flask or Django framework
- Authored publications at peer-reviewed AI conferences (e.g., NeurIPS, CVPR, ICML, ICLR, ICCV, ACL)
- Prior experience of managing teams and building large scale AI / CV projects is a big plus
- Great interpersonal and communication skills
- Critical thinking and problem-solving skills
We are looking for a technically driven MLOps Engineer for one of our premium clients.
Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), and orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., for experiment tracking, model governance, packaging, deployment, or a feature store)
• Practical knowledge of delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
Hi,
We are hiring a Data Scientist for Bangalore.
Req Skills:
- NLP
- ML programming
- Spark
- Model Deployment
- Experience processing unstructured data and building NLP models
- Experience with big data tools such as PySpark
- Pipeline orchestration using Airflow and model deployment experience is preferred
Lead Data Engineer
Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.
Job responsibilities
· You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems
· You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges
· You will collaborate with Data Scientists in order to design scalable implementations of their models
· You will pair to write clean and iterative code based on TDD
· Leverage various continuous delivery practices to deploy, support and operate data pipelines
· Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
· Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
· Create data models and speak to the tradeoffs of different modeling approaches
· On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product
· Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
· Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
Job qualifications
Technical skills
· You are equally happy coding and leading a team to implement a solution
· You have a track record of innovation and expertise in Data Engineering
· You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations
· You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
· You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
· Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
· You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
· You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
· Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems
Professional skills
· Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers
· You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
· An interest in coaching others, sharing your experience and knowledge with teammates
· You enjoy influencing others and always advocate for technical excellence while being open to change when needed
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet business requirements
- Identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Work with Data, Analytics & Tech team to extract, arrange and analyze data
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
- Build analytical tools that utilize the data pipeline, providing actionable insight into key business performance metrics including operational efficiency and customer acquisition
- Work closely with all business units and engineering teams to develop a strategy for long-term data platform architecture.
- Work with stakeholders including the Executive, Product, Data, and Design teams to support their data infrastructure needs and assist with data-related technical issues.
- SQL
- Ruby or Python (Ruby preferred)
- Apache-Hadoop based analytics
- Data warehousing
- Data architecture
- Schema design
- ML
- Prior experience of 2 to 5 years as a Data Engineer.
- Ability in managing and communicating data warehouse plans to internal teams.
- Experience designing, building, and maintaining data processing systems.
- Ability to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions.
- Excellent analytic skills associated with working on unstructured datasets.
- Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata.
- B.E. in Computer Science or equivalent.
- In-depth knowledge of machine learning algorithms and their applications, including practical experience with and theoretical understanding of algorithms for classification, regression, and clustering.
- Hands-on experience in computer vision and deep learning projects solving real-world problems involving vision tasks such as object detection, object tracking, instance segmentation, activity detection, depth estimation, optical flow, multi-view geometry, domain adaptation, etc.
- Strong understanding of modern and traditional Computer Vision Algorithms.
- Experience in one of the Deep Learning frameworks/networks: PyTorch, TensorFlow, Darknet (YOLO v4/v5), U-Net, Mask R-CNN, EfficientDet, BERT, etc.
- Proficiency with CNN architectures such as ResNet, VGG, U-Net, MobileNet, pix2pix, and CycleGAN.
- Experienced user of libraries such as OpenCV, scikit-learn, matplotlib and pandas.
- Ability to transform research articles into working solutions to solve real-world problems.
- High proficiency in Python programming.
- Familiarity with software development practices/pipelines (DevOps: Kubernetes, Docker containers, CI/CD tools).
- Strong communication skills.
- Use data to develop machine learning models that optimize decision making in Credit Risk, Fraud, Marketing, and Operations
- Implement data pipelines, new features, and algorithms that are critical to our production models
- Create scalable strategies to deploy and execute your models
- Write well designed, testable, efficient code
- Identify valuable data sources and automate collection processes.
- Preprocess structured and unstructured data.
- Analyze large amounts of information to discover trends and patterns.
Requirements:
- 2+ years of experience in applied data science or engineering with a focus on machine learning
- Python expertise with good knowledge of machine learning libraries, tools, techniques, and frameworks (e.g., pandas, sklearn, xgboost, lightgbm, logistic regression, random forest classifiers, gradient boosting regressors, etc.)
- Strong quantitative and programming skills with a product-driven sensibility
- Actively engage with internal business teams to understand their challenges and deliver robust, data-driven solutions.
- Work alongside global counterparts to solve data-intensive problems using standard analytical frameworks and tools.
- Be encouraged and expected to innovate and be creative in your data analysis, problem-solving, and presentation of solutions.
- Network and collaborate with a broad range of internal business units to define and deliver joint solutions.
- Work alongside customers to leverage cutting-edge technology (machine learning, streaming analytics, and ‘real’ big data) to creatively solve problems and disrupt existing business models.
In this role, we are looking for:
- A problem-solving mindset with the ability to understand business challenges and how to apply your analytics expertise to solve them.
- The unique person who can present complex mathematical solutions in a simple manner that most will understand, including customers.
- An individual excited by innovation and new technology and eager to find ways to employ these innovations in practice.
- A team mentality, empowered by the ability to work with a diverse set of individuals.
Basic Qualifications
- A Bachelor’s degree in Data Science, Math, Statistics, Computer Science or related field with an emphasis on analytics.
- 5+ years of professional experience in a data scientist/analyst role or similar.
- Proficiency in your statistics/analytics/visualization tool of choice, preferably in the Microsoft Azure suite (including Azure ML Studio and Power BI) as well as R, Python, and SQL.
Preferred Qualifications
- Excellent communication, organizational transformation, and leadership skills
- Demonstrated excellence in Data Science, Business Analytics and Engineering
About Toyota Connected
If you want to change the way the world works, transform the automotive industry and positively impact others on a global scale, then Toyota Connected is the right place for you! Within our collaborative, fast-paced environment we focus on continual improvement and work in a highly iterative way to deliver exceptional value in the form of connected products and services that wow and delight our customers and the world around us.
About the Team
Toyota Connected India is hiring talented engineers at Chennai to use Deep Learning, Computer vision, Big data, high performance cloud-based services and other cutting-edge technologies to transform the customer experience with their vehicle. Come help us re-imagine what mobility can be today and for years to come!
Job Description
The Toyota Connected team is looking for a Senior ML Engineer (Computer Vision) to be a part of a highly talented engineering team helping create new products and services from the ground up for the next generation of connected vehicles. We are looking for team members who are creative problem solvers, excited to work in new technology areas, and ready to wear multiple hats to get things done in a highly energized, fast-paced, innovative, and collaborative startup environment.
What you will do
• Develop solutions using Machine Learning/Deep Learning and other advanced technologies to solve a variety of problems
• Develop image analysis algorithms and deep learning architectures to solve Computer vision related problems.
• Implement cutting-edge machine learning techniques in image classification, object detection, semantic segmentation, sequence modeling, etc. using frameworks such as OpenCV, TensorFlow, and PyTorch.
• Translate user stories and business requirements to technical solutions by building quick prototypes or proof of concepts with several business and technical stakeholder groups in both internal and external organizations
• Partner with leaders in the area and have the insight to select off-the-shelf components vs. building from scratch
• Convert the proof of concepts to production-grade solutions that can scale for hundreds of thousands of users
• Be hands-on where required and lead from the front in following best practices in development and CI/CD methods
• Own delivery of features from top to bottom, from concept to code to production
• Develop tools and libraries that will enable rapid and scalable development in the future
You are a successful candidate if
• You are smart and can demonstrate it
• You have 8+ years of experience as a software engineer, with a minimum of 3 years of hands-on experience delivering products or solutions that utilized Computer Vision
• Strong experience in deploying solutions to production, with hands-on experience in any public cloud environment (AWS, GCP or Azure)
• Excellent proficiency in OpenCV or related computer vision frameworks and libraries
• Mathematical understanding of a variety of statistical learning algorithms (Reinforcement Learning, Supervised/Unsupervised, Graphical Models)
• Expertise in a variety of Deep Learning architectures including Residual Networks, RNN/CNN, Transformers, and Transfer Learning, and experience delivering value using these in real production environments for real customers
• You have deep proficiency in Python and at least one other major programming language (C++, Java, Golang)
• You are very fluent in one or more ML tools/libraries like TensorFlow, PyTorch, Caffe, and/or Theano and have solved several real-life problems using these
• We think the knowledge acquired earning a degree in Computer Science or Math would be of great value in this position, but if you're smart and have the experience that backs up your abilities, for us, talent trumps degree every time
What is in it for you?
• Top of the line compensation!
• You'll be treated like the professional we know you are and left to manage your own time and workload.
• Yearly gym membership reimbursement and free catered lunches.
• No dress code! We trust you are responsible enough to choose what’s appropriate to wear for the day.
• Opportunity to build products that improve the safety and convenience of millions of customers.
Our Core Values
- Empathetic: We begin making decisions by looking at the world from the perspective of our customers, teammates, and partners.
- Passionate: We are here to build something great, not just for the money. We are always looking to improve the experience of our millions of customers
- Innovative: We experiment with ideas to get to the best solution. Any constraint is a challenge, and we love looking for creative ways to solve them.
- Collaborative: When it comes to people, we think the whole is greater than the sum of its parts and that everyone has a role to play in the success!