Job Details:
Designation: Data Scientist
Urgently required (notice period of maximum 15 days)
Location: Mumbai
Experience: 5-7 years
Package Offered: Rs. 5,00,000 to Rs. 9,00,000 p.a.
Job Description:
Responsibilities:
- Identify valuable data sources and automate collection processes
- Undertake preprocessing of structured and unstructured data
- Analyze large amounts of information to discover trends and patterns
- Build predictive models and machine-learning algorithms
- Combine models through ensemble modeling
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams
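The ensemble-modeling responsibility above can be sketched as a simple majority-vote combiner. The three base models and the customer features below are hypothetical stand-ins, not a prescribed implementation:

```python
from collections import Counter

# Hypothetical base models: each maps a feature dict to a class label.
def model_a(x):
    return "churn" if x["logins_last_30d"] < 3 else "stay"

def model_b(x):
    return "churn" if x["support_tickets"] > 5 else "stay"

def model_c(x):
    return "churn" if x["tenure_months"] < 6 else "stay"

def ensemble_predict(x, models=(model_a, model_b, model_c)):
    """Combine base-model predictions by majority vote."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

customer = {"logins_last_30d": 1, "support_tickets": 2, "tenure_months": 4}
print(ensemble_predict(customer))  # two of three models vote "churn"
```

The same voting idea generalizes to weighted votes or averaged probabilities in real model stacks.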
Requirements:
- Proven experience as a Data Scientist or Data Analyst
- Experience in data mining
- Understanding of machine learning and operations research
- Knowledge of R, SQL and Python; familiarity with Scala or Java is an asset
- Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
- Analytical mind and business acumen
- Strong math skills (e.g. statistics, algebra)
- Problem-solving aptitude
- Excellent communication and presentation skills
- BSc/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or other quantitative field is preferred
Similar jobs
Senior Executive - Analytics
Overview of job:
Our Client is the world’s largest media investment company which is a part of WPP. They are a global digital transformation agency with 1200 employees across 21 nations. Our team of experts support clients in programmatic, social, paid search, analytics, technology, organic search, affiliate marketing, e-commerce and across traditional channels.
We are currently looking for a Sr Executive – Analytics to join us. In this role, you will have a massive opportunity to build and be a part of the largest performance marketing setup in APAC. We are committed to fostering a culture of diversity and inclusion. Our people are our strength, so we respect and nurture their individual talent and potential.
Reporting of the role - This role reports to the Director - Analytics.
3 best things about the job:
1. Responsible for data & analytics projects and developing data strategies by diving into data, extrapolating insights and providing guidance to clients
2. Build and be a part of a dynamic team
3. Being part of a global organisation with rapid growth opportunities
Responsibilities of the role:
Build Marketing-Mix and Multi-Touch Attribution models using a range of free and paid tools.
Work with large data sets via hands-on data processing to produce structured data sets for analysis.
Design and build Visualization, Dashboard and reports for both Internal and external clients using Tableau, Power BI, Datorama or R Shiny/Python.
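The multi-touch attribution modeling mentioned above can be illustrated with the simplest variant, linear attribution, which splits each conversion's credit equally across the journey's touchpoints. The channel names and journeys below are hypothetical:

```python
def linear_attribution(journeys):
    """Split each conversion's credit equally across all touchpoints
    in the customer journey (linear multi-touch attribution)."""
    credit = {}
    for touchpoints in journeys:
        share = 1.0 / len(touchpoints)
        for channel in touchpoints:
            credit[channel] = credit.get(channel, 0.0) + share
    return credit

# Hypothetical converting journeys: ordered channel touchpoints.
journeys = [
    ["paid_search", "social", "email"],
    ["social", "email"],
    ["paid_search"],
]
print(linear_attribution(journeys))
```

Production models (first-touch, time-decay, data-driven) change only how `share` is computed per touchpoint.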
What you will need:
Degree in Mathematics, Statistics, Economics, Engineering, Data Science, Computer Science or quantitative field.
2-3 years’ experience in Marketing/Data Analytics or a related field, with hands-on experience in building Marketing-Mix and Attribution models.
Proficiency in one or more coding languages – preferred languages: Python, R
Proficiency in one or more Visualization Tools – Tableau, Datorama, Power BI
Proficiency in using SQL.
Proficiency with one or more statistical tools is a plus – Example: SPSS, SAS, MATLAB, Mathcad.
Working experience using big data technologies (Hive/Hadoop) is a plus
As a machine learning engineer on the team, you will
• Help science and product teams innovate in developing and improving end-to-end solutions to machine-learning-based security/privacy controls
• Partner with scientists to brainstorm and create new ways to collect/curate data
• Design and build infrastructure critical to solving problems in privacy-preserving machine learning
• Help the team self-organize and follow machine learning best practices.
Basic Qualifications
• 4+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
• 4+ years of programming experience with at least one modern language such as Java, C++, or C#, including object-oriented design
• 4+ years of professional software development experience
• 4+ years of experience as a mentor, tech lead OR leading an engineering team
• 4+ years of professional software development experience in the Big Data and Machine Learning fields
• Knowledge of common ML frameworks such as TensorFlow, PyTorch
• Experience with cloud-provider Machine Learning tools such as AWS SageMaker
• Programming experience with at least two modern languages such as Python, Java, C++, or C#, including object-oriented design
• 3+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
• Experience in Python
• BS in Computer Science or equivalent
Location: Chennai
Education: BE/BTech
Experience: Minimum 3 years of experience as a Data Scientist/Data Engineer
Domain knowledge: Data cleaning, modelling, analytics, statistics, machine learning, AI
Requirements:
- To be part of Digital Manufacturing and Industrie 4.0 projects across client group of companies
- Design and develop AI/ML models to be deployed across factories
- Knowledge on Hadoop, Apache Spark, MapReduce, Scala, Python programming, SQL and NoSQL databases is required
- Should be strong in statistics, data analysis, data modelling, machine learning techniques and Neural Networks
- Prior experience in developing AI and ML models is required
- Experience with data from the Manufacturing Industry would be a plus
Roles and Responsibilities:
- Develop AI and ML models for the Manufacturing Industry with a focus on Energy, Asset Performance Optimization and Logistics
- Multitasking and good communication skills are necessary
- Entrepreneurial attitude
Additional Information:
- Travel: Must be willing to travel on shorter duration within India and abroad
- Job Location: Chennai
- Reporting to: Team Leader, Energy Management System
Job brief
We are looking for a Lead Data Scientist to lead a technical team and help us gain useful insights from raw data.
Lead Data Scientist responsibilities include managing the data science team, planning
projects and building analytics models. You should have a strong problem-solving ability
and a knack for statistical analysis. If you’re also able to align our data products with our
business goals, we’d like to meet you.
Your ultimate goal will be to help improve our products and business decisions by
making the most out of our data.
Responsibilities
● Conceive, plan and prioritize data projects
● Ensure data quality and integrity
● Interpret and analyze data problems
● Build analytic systems and predictive models
● Align data projects with organizational goals
● Lead data mining and collection procedures
● Test performance of data-driven products
● Visualize data and create reports
● Build and manage a team of data scientists and data engineers
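The predictive-modeling responsibility above can be sketched at its simplest: a least-squares line fit, written out from the closed-form solution rather than a library call. The ad-spend/revenue numbers are invented for illustration:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x using the closed-form solution."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical data: ad spend (units) vs. revenue (lakh Rs.).
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # intercept 0.09, slope 1.99
```

Real analytics models add more features and regularization, but the fit-then-predict workflow is the same shape.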
Requirements
● Proven experience as a Data Scientist or similar role
● Solid understanding of machine learning
● Knowledge of data management and visualization techniques
● A knack for statistical analysis and predictive modeling
● Good knowledge of R, Python and MATLAB
● Experience with SQL and NoSQL databases
● Strong organizational and leadership skills
● Excellent communication skills
● A business mindset
● Degree in Computer Science, Data Science, Mathematics or similar field
● Familiarity with emerging/cutting-edge open-source data science/machine learning libraries and big data platforms
Job Title: Senior Data Engineer
Experience: 8Yrs to 11Yrs
Location: Remote
Notice: Immediate or Max 1Month
Role: Permanent Role
Skill set: Google Cloud Platform, BigQuery, Java, Python, Airflow, Dataflow, Apache Beam.
Experience required:
5 years of experience in software design and development with 4 years of experience in the data engineering field is preferred.
2 years of hands-on experience with GCP cloud data implementation suites such as BigQuery, Pub/Sub, Dataflow/Apache Beam, Airflow/Composer, Cloud Storage, etc.
Strong experience and understanding of very large-scale data architecture, solutions, and operationalization of data warehouses, data lakes, and analytics platforms.
Mandatory: 1 year of software development experience using Java or Python.
Extensive hands-on experience working with data using SQL and Python.
Must Have: GCP, BigQuery, Airflow, Dataflow, Python, Java.
GCP knowledge is a must.
Java as the programming language (preferred).
BigQuery, Pub/Sub, Dataflow/Apache Beam, Airflow/Composer, Cloud Storage, Python.
Good communication skills.
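The Beam/Dataflow-style pipeline pattern this role centers on can be illustrated without Beam itself: a Map stage that parses records, then a GroupByKey/Combine stage that aggregates per key. The event records below are hypothetical:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical raw event records: date,event_type,amount
raw_events = [
    "2024-01-01,checkout,120",
    "2024-01-01,checkout,80",
    "2024-01-02,refund,40",
]

# "Map" stage: parse each CSV line into a (key, value) pair.
parsed = [(line.split(",")[1], int(line.split(",")[2])) for line in raw_events]

# "GroupByKey" + "Combine" stages: sum amounts per event type.
parsed.sort(key=itemgetter(0))
totals = {k: sum(v for _, v in g) for k, g in groupby(parsed, key=itemgetter(0))}
print(totals)  # {'checkout': 200, 'refund': 40}
```

In Beam proper these stages would be `beam.Map`, `beam.GroupByKey`, and a combiner, with the runner (e.g. Dataflow) handling distribution.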
You will:
- Create highly scalable AWS micro-services utilizing cutting edge cloud technologies.
- Design and develop Big Data pipelines handling huge geospatial data.
- Bring clarity to large complex technical challenges.
- Collaborate with Engineering leadership to help drive technical strategy.
- Scope, plan and estimate projects.
- Mentor and coach team members at different levels of experience.
- Participate in peer code reviews and technical meetings.
- Cultivate a culture of engineering excellence.
- Seek, implement and adhere to standards, frameworks and best practices in the industry.
- Participate in on-call rotation.
You have:
- Bachelor’s/Master’s degree in computer science, computer engineering or relevant field.
- 5+ years of experience in software design, architecture and development.
- 5+ years of experience using object-oriented languages (Java, Python).
- Strong experience with Big Data technologies like Hadoop, Spark, MapReduce, Kafka, etc.
- Strong experience in working with different AWS technologies.
- Excellent competencies in data structures & algorithms.
Nice to have:
- Proven track record of delivering large scale projects, and an ability to break down large tasks into smaller deliverable chunks
- Experience in developing high throughput low latency backend services
- Affinity to spatial data structures and algorithms.
- Familiarity with Postgres DB, Google Places or Mapbox APIs
What we offer
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Unlimited Paid Time Off
- In Office Daily Catered Lunch
- Fully stocked snacks/beverages
- 401(k) employer match
- Health coverage including medical, dental, vision and option for HSA or FSA
- Generous parental leave
- Company-wide DEIB Committee
- Inclusion Academy Seminars
- Wellness/Gym Reimbursement
- Pet Expense Reimbursement
- Company-wide Volunteer Day
- Education reimbursement program
- Cell phone reimbursement
- Equity Analysis to ensure fair pay
Experience Range | 2 Years - 10 Years
Function | Information Technology
Desired Skills:
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformations and aggregations
• Good grasp of ELT architecture: business-rules processing and data extraction from the data lake into data streams for business consumption
• Good customer communication
• Good analytical skills
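A "query of fair complexity" of the kind the skills list describes typically combines aggregation with filtering on the aggregate. Here is a minimal sketch using sqlite3 as a stand-in for a production SQL database; the table and columns are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 100.0), ("north", 250.0), ("south", 75.0), ("south", 20.0)],
)

# Aggregate order value per region, keeping only regions above a threshold.
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS total
    FROM orders
    GROUP BY region
    HAVING total > 100
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # [('north', 350.0)]
```

The same GROUP BY / HAVING shape carries over directly to Spark SQL against a DataFrame registered as a temp view.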
Education Type | Engineering
Degree / Diploma | Bachelor of Engineering, Bachelor of Computer Applications, Any Engineering
Specialization / Subject | Any Specialisation
Job Type | Full Time
Job ID | 000018
Department | Software Development
Who Are We
A research-oriented company with expertise in computer vision and artificial intelligence, Orbo is, at its core, a comprehensive platform built on an AI-based visual enhancement stack. This way, companies can find a product suited to their needs, where deep-learning-powered technology can automatically improve their imagery.
ORBO's solutions are helping the BFSI, beauty and personal care, and e-commerce image retouching industries with digital transformation in multiple ways.
WHY US
- Join top AI company
- Grow with your best companions
- Continuous pursuit of excellence, equality, respect
- Competitive compensation and benefits
You'll be a part of the core team and will be working directly with the founders in building and iterating upon the core products that make cameras intelligent and images more informative.
To learn more about how we work, please check out
Description:
We are looking for a computer vision engineer to lead our team in developing a factory floor analytics SaaS product. This would be a fast-paced role and the person will get an opportunity to develop an industrial grade solution from concept to deployment.
Responsibilities:
- Research and develop computer vision solutions for industries (BFSI, Beauty and personal care, E-commerce, Defence etc.)
- Lead a team of ML engineers in developing an industrial AI product from scratch
- Set up an end-to-end Deep Learning pipeline for data ingestion, preparation, model training, validation and deployment
- Tune the models to achieve high accuracy and minimum latency
- Deploy the developed computer vision models on edge devices after optimization to meet customer requirements
Requirements:
- Bachelor’s degree
- Deep and broad understanding of computer vision and deep learning algorithms
- 4+ years of industrial experience in computer vision and/or deep learning
- Experience in taking an AI product from scratch to commercial deployment.
- Experience in Image enhancement, object detection, image segmentation, image classification algorithms
- Experience in deployment with OpenVINO, ONNXruntime and TensorRT
- Experience in deploying computer vision solutions on edge devices such as Intel Movidius and Nvidia Jetson
- Experience with machine/deep learning frameworks such as TensorFlow and PyTorch
- Proficient understanding of code versioning tools, such as Git
Our perfect candidate is someone who:
- is proactive and an independent problem solver
- is a constant learner. We are a fast growing start-up. We want you to grow with us!
- is a team player and good communicator
What We Offer:
- You will have fun working with a fast-paced team on a product that can impact the business model of E-commerce and BFSI industries. As the team is small, you will easily be able to see a direct impact of what you build on our customers (Trust us - it is extremely fulfilling!)
- You will be in charge of what you build and be an integral part of the product development process
- Technical and financial growth!
Job description
Role: Lead Architect (Spark, Scala, Big Data/Hadoop, Java)
Primary Location : India-Pune, Hyderabad
Experience : 7 - 12 Years
Management Level: 7
Joining Time: Immediate Joiners are preferred
- Attend requirements gathering workshops, estimation discussions, design meetings and status review meetings
- Experience in solution design and solution architecture for the data engineering model, to build and implement Big Data projects on-premises and on cloud
- Align architecture with business requirements and stabilize the developed solution
- Ability to build prototypes to demonstrate the technical feasibility of your vision
- Professional experience facilitating and leading solution design, architecture and delivery planning activities for data-intensive and high-throughput platforms and applications
- Able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them
- Able to help programmers and project managers in the design, planning and governance of implementing projects of any kind
- Develop, construct, test and maintain architectures, and run Sprints for the development and rollout of functionalities
- Data analysis and code development experience, ideally in Big Data: Spark, Hive, Hadoop, Java, Python, PySpark
- Execute projects of various types, i.e. design, development, implementation and migration of functional analytics models/business logic across architecture approaches
- Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions for the product
- Deploy sophisticated analytics programs using any cloud application
Perks and Benefits we Provide!
- Working with Highly Technical and Passionate, mission-driven people
- Subsidized Meals & Snacks
- Flexible Schedule
- Approachable leadership
- Access to various learning tools and programs
- Pet Friendly
- Certification Reimbursement Policy
- Check out more about us on our website below!
www.datametica.com
Role and Responsibilities
- Build a low latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
- Build robust RESTful APIs that serve data and insights to DataWeave and other products
- Design user interaction workflows on our products and integrate them with data APIs
- Help stabilize and scale our existing systems. Help design the next generation systems.
- Scale our back end data and analytics pipeline to handle increasingly large amounts of data.
- Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
- Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope.
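The RESTful-API responsibility above can be sketched with Python's stdlib HTTP server: one read-only JSON endpoint serving dashboard-style metrics. The endpoint path and metric names are invented for illustration, not DataWeave's actual API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MetricsHandler(BaseHTTPRequestHandler):
    """Hypothetical read-only endpoint serving dashboard metrics as JSON."""
    METRICS = {"products_tracked": 1200, "price_changes_today": 87}

    def do_GET(self):
        if self.path == "/api/v1/metrics":
            body = json.dumps(self.METRICS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # suppress per-request logging

# Port 0 lets the OS pick a free port; call server.serve_forever() to serve.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
```

A production serving layer would sit behind a framework (Flask/FastAPI) with caching in front of the analytics store, but the request-to-JSON contract is the same.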
Skills and Requirements
- 8-15 years of experience building and scaling APIs and web applications.
- Experience building and managing large scale data/analytics systems.
- Have a strong grasp of CS fundamentals and excellent problem solving abilities. Have a good understanding of software design principles and architectural best practices.
- Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
- Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
- Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, Elastic.
- Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog etc.
- Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
- Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of your projects you have hosted on GitHub.
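The MapReduce computational model mentioned in the skills list can be shown in miniature with the classic word count: a map phase emitting (word, 1) pairs, a shuffle grouping by key, and a reduce phase summing each group. This is a single-process sketch of the model, not a distributed implementation:

```python
from collections import defaultdict

def map_phase(doc):
    """Emit (word, 1) pairs, like a MapReduce mapper."""
    return [(word, 1) for word in doc.lower().split()]

def shuffle(pairs):
    """Group intermediate pairs by key, like the shuffle stage."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Sum the values for each key, like a MapReduce reducer."""
    return {key: sum(values) for key, values in grouped.items()}

docs = ["big data big ideas", "data pipelines"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'ideas': 1, 'pipelines': 1}
```

Hadoop and Spark distribute exactly these three stages across machines; Spark's `rdd.flatMap(...).reduceByKey(...)` is the same idea fused together.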