
11+ MLS Jobs in India

Apply to 11+ MLS Jobs on Cutshort.io. Find your next job, effortlessly. Browse MLS Jobs and apply today!

PAGO Analytics India Pvt Ltd
Vijay Cheripally
Posted by Vijay Cheripally
Remote, Bengaluru (Bangalore), Mumbai, NCR (Delhi | Gurgaon | Noida)
2 - 8 yrs
₹8L - ₹15L / yr
Python
PySpark
Microsoft Windows Azure
SQL Azure
Data Analytics
+6 more
  • Be an integral part of large-scale client business development and delivery engagements
  • Develop the software and systems needed for end-to-end execution on large projects
  • Work across all phases of the SDLC, and use software engineering principles to build scaled solutions
  • Build the knowledge base required to deliver increasingly complex technology projects


  • Object-oriented languages (e.g. Python, PySpark, Java, C#, C++) and frameworks (e.g. J2EE or .NET)
  • Database programming using any flavour of SQL
  • Expertise in relational and dimensional modelling, including big data technologies
  • Exposure across the full SDLC process, including testing and deployment
  • Expertise in Microsoft Azure is mandatory, including components such as Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, Azure ML service, etc.
  • Good knowledge of Python and Spark is required
  • Good understanding of how to enable analytics using cloud technology and MLOps
  • Experience with Azure infrastructure and Azure DevOps will be a strong plus
OJCommerce

at OJCommerce

3 recruiters
Rajalakshmi N
Posted by Rajalakshmi N
Chennai
2 - 5 yrs
₹7L - ₹12L / yr
Beautiful Soup
Web Scraping
Python
Selenium

Role: Web Scraping Engineer

Experience: 2 to 3 years

Job Location: Chennai

About OJ Commerce: 


OJ Commerce (OJC), a rapidly expanding and profitable online retailer, is headquartered in Florida, USA, with a fully functional office in Chennai, India. We deliver exceptional value to our customers by harnessing cutting-edge technology, fostering innovation, and establishing strategic brand partnerships to enable a seamless, enjoyable shopping experience featuring high-quality products at unbeatable prices. Our advanced, data-driven system streamlines operations with minimal human intervention.

Our extensive product portfolio encompasses over a million SKUs and more than 2,500 brands across eight primary categories. With a robust presence on major platforms such as Amazon, Walmart, Wayfair, Home Depot, and eBay, we directly serve consumers in the United States.

As we continue to forge new partner relationships, our flagship website, www.ojcommerce.com, has rapidly emerged as a top-performing e-commerce channel, catering to millions of customers annually.

Job Summary:

We are seeking a Web Scraping Engineer and Data Extraction Specialist who will play a crucial role in our data acquisition and management processes. The ideal candidate will be proficient in developing and maintaining efficient web crawlers capable of extracting data from large websites and storing it in a database. Strong expertise in Python, web crawling, and data extraction, along with familiarity with popular crawling tools and modules, is essential. Additionally, the candidate should demonstrate the ability to effectively utilize API tools for testing and retrieving data from various sources. Join our team and contribute to our data-driven success!


Responsibilities:


  • Develop and maintain web crawlers in Python.
  • Crawl large websites and extract data.
  • Store data in a database.
  • Analyze and report on data.
  • Work with other engineers to develop and improve our web crawling infrastructure.
  • Stay up to date on the latest crawling tools and techniques.



Required Skills and Qualifications:


  • Bachelor's degree in computer science or a related field.
  • 2-3 years of experience with Python and web crawling.
  • Familiarity with tools/modules such as Scrapy, Selenium, Requests, Beautiful Soup, etc.
  • API tools such as Postman or equivalent. 
  • Working knowledge of SQL.
  • Experience with web crawling and data extraction.
  • Strong problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Excellent communication and documentation skills.
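As a rough illustration of the extraction work this role involves, here is a minimal sketch using Beautiful Soup to pull fields out of a product-listing fragment. The markup, class names, and fields are hypothetical; a real crawler would fetch pages with Requests or Selenium before parsing:

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Hypothetical product-listing markup standing in for a fetched page.
HTML = """
<div class="product"><span class="name">Desk</span><span class="price">$120</span></div>
<div class="product"><span class="name">Chair</span><span class="price">$45</span></div>
"""

def extract_products(html: str) -> list[dict]:
    """Parse product name/price pairs out of listing HTML."""
    soup = BeautifulSoup(html, "html.parser")
    return [
        {
            "name": div.select_one(".name").get_text(strip=True),
            "price": div.select_one(".price").get_text(strip=True),
        }
        for div in soup.select("div.product")
    ]

print(extract_products(HTML))
# → [{'name': 'Desk', 'price': '$120'}, {'name': 'Chair', 'price': '$45'}]
```

In practice the returned records would be written to a database table rather than printed.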


What we Offer

• Competitive salary

• Medical Benefits/Accident Cover

• Flexi Office Working Hours

• Fast-paced startup environment

Bengaluru (Bangalore)
4 - 7 yrs
₹20L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
Roles & Responsibilities
What will you do?
  • Deliver plugins for our Python-based ETL pipelines
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Implement algorithms to analyse large data sets
  • Draft design documents that translate requirements into code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
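To make the plugin idea above concrete, one common shape for a plugin-based ETL pipeline is a registry of named transform steps. This is a hedged sketch of that pattern under assumed names, not the team's actual interface:

```python
from typing import Callable

# Registry mapping step names to transform functions (hypothetical design).
TRANSFORMS: dict[str, Callable[[list[dict]], list[dict]]] = {}

def transform(name: str):
    """Decorator that registers a function as a named pipeline plugin."""
    def register(fn):
        TRANSFORMS[name] = fn
        return fn
    return register

@transform("drop_nulls")
def drop_nulls(records):
    # Discard records with any missing values.
    return [r for r in records if all(v is not None for v in r.values())]

@transform("uppercase_country")
def uppercase_country(records):
    # Normalize the (assumed) country field to upper case.
    return [{**r, "country": r["country"].upper()} for r in records]

def run_pipeline(records, steps):
    """Apply the named transform steps in order."""
    for step in steps:
        records = TRANSFORMS[step](records)
    return records

rows = [{"id": 1, "country": "in"}, {"id": 2, "country": None}]
print(run_pipeline(rows, ["drop_nulls", "uppercase_country"]))
# → [{'id': 1, 'country': 'IN'}]
```

New plugins can then be delivered by registering additional functions without touching the pipeline runner.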
What are we looking for?
  • First and foremost, you are a Python developer experienced with the Python data stack
  • You love and care about data
  • Your code is an artistic manifesto, reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don't Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or are keen to take, your personal projects, and any contributions to open-source communities)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
Kaleidofin

at Kaleidofin

3 recruiters
Poornima B
Posted by Poornima B
Chennai, Bengaluru (Bangalore)
2 - 4 yrs
Best in industry
Machine Learning (ML)
Python
SQL
Customer Acquisition
Big Data
+2 more
Responsibilities
  • Partnering with internal business owners (product, marketing, edit, etc.) to understand needs and develop custom analysis to optimize for user engagement and retention
  • Good understanding of the underlying business and workings of cross functional teams for successful execution
  • Design and develop analyses based on business requirements and challenges.
  • Leveraging statistical analysis on consumer research and data mining projects, including segmentation, clustering, factor analysis, multivariate regression, predictive modeling, etc.
  • Providing statistical analysis on custom research projects and consult on A/B testing and other statistical analysis as needed. Other reports and custom analysis as required.
  • Identify and use appropriate investigative and analytical technologies to interpret and verify results.
  • Apply and learn a wide variety of tools and languages to achieve results
  • Use best practices to develop statistical and/or machine learning techniques to build models that address business needs.

Requirements
  • 2-4 years of relevant experience in data science.
  • Preferred education: Bachelor's degree in a technical field or equivalent experience.
  • Experience in advanced analytics, model building, statistical modeling, optimization, and machine learning algorithms.
  • Machine learning algorithms: crystal-clear understanding, coding, implementation, error analysis, and model-tuning knowledge of Linear Regression, Logistic Regression, SVM, shallow Neural Networks, clustering, Decision Trees, Random Forest, XGBoost, Recommender Systems, ARIMA, and Anomaly Detection; feature selection, hyperparameter tuning, model selection, error analysis, boosting, and ensemble methods.
  • Strong with programming languages like Python and data processing using SQL or equivalent and ability to experiment with newer open source tools.
  • Experience in normalizing data to ensure it is homogeneous and consistently formatted to enable sorting, query and analysis.
  • Experience designing, developing, implementing and maintaining a database and programs to manage data analysis efforts.
  • Experience with big data and cloud computing, viz. Spark and Hadoop (MapReduce, Pig, Hive).
  • Experience in risk and credit score domains preferred.
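For instance, fitting and applying one of the algorithms listed above takes only a few lines with scikit-learn. This is a toy logistic-regression sketch on clearly separable synthetic data, purely illustrative of the workflow (fit, then predict):

```python
from sklearn.linear_model import LogisticRegression  # third-party: pip install scikit-learn

# Toy, clearly separable data standing in for real model features.
X = [[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)

# Points near each cluster are classified accordingly.
print(model.predict([[0.5], [11.5]]))  # → [0 1]
```

Real credit-risk work would add the feature selection, hyperparameter tuning (e.g. via cross-validated grid search), and error analysis the posting describes.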
Credit Saison Finance Pvt Ltd
Najma Khanum
Posted by Najma Khanum
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹30L / yr
Data Science
R Programming
Python
Role & Responsibilities:
1) Understand the business objectives, formulate hypotheses and collect the relevant data using SQL/R/Python. Analyse bureau, customer and lending performance data on a periodic basis to generate insights. Present complex information and data in an uncomplicated, easy-to-understand way to drive action.
2) Independently build and refit robust models for achieving game-changing growth while managing risk.
3) Identify and implement new analytical/modelling techniques to improve model performance across the customer lifecycle (acquisitions, management, fraud, collections, etc.).
4) Help define the data infrastructure strategy for the Indian subsidiary.
a. Monitor data quality and quantity.
b. Define a strategy for acquisition, storage, retention, and retrieval of data elements, e.g. identify new data types and collaborate with technology teams to capture them.
c. Build a culture of strong automation and monitoring.
d. Stay connected to Analytics industry trends in data, techniques, technology, etc., and leverage them to continuously evolve data science standards at Credit Saison.

Required Skills & Qualifications:
1) 3+ years working in data science domains with experience in building risk models. Fintech/Financial analysis experience is required.
2) Expert-level proficiency in analytical tools and languages such as SQL, Python, R/SAS, VBA, etc.
3) Experience with building models using common modelling techniques (Logistic and linear regressions, decision trees, etc.)
4) Strong familiarity with Tableau/Power BI/Qlik Sense or other data visualization tools
5) Tier-1 college graduate (IIT/IIM/NIT/BITS preferred).
6) Demonstrated autonomy, thought leadership, and learning agility.
DataMetica

at DataMetica

1 video
7 recruiters
Nikita Aher
Posted by Nikita Aher
Pune, Hyderabad
3 - 12 yrs
₹5L - ₹25L / yr
Apache Kafka
Big Data
Hadoop
Apache Hive
Java
+1 more

Summary
Our Kafka developer has a combination of technical skills, communication skills and business knowledge. The developer should be able to work on multiple medium-to-large projects. The successful candidate will have excellent technical skills in Apache/Confluent Kafka and an enterprise data warehouse (preferably GCP BigQuery or an equivalent cloud EDW), and will also be able to take oral and written business requirements and develop efficient code to meet set deliverables.

 

Must Have Skills

  • Participate in the development, enhancement and maintenance of data applications both as an individual contributor and as a lead.
  • Leading in the identification, isolation, resolution and communication of problems within the production environment.
  • Lead developer applying technical skills in Apache/Confluent Kafka (preferred) or AWS Kinesis (optional), and in a cloud enterprise data warehouse: Google BigQuery (preferred), AWS Redshift, or Snowflake (optional)
  • Design and recommend the best approach for data movement from different sources to the cloud EDW using Apache/Confluent Kafka
  • Performs independent functional and technical analysis for major projects supporting several corporate initiatives.
  • Communicate and work with IT partners and the user community at all levels, from senior management to developers to business SMEs, for project definition.
  • Works on multiple platforms and multiple projects concurrently.
  • Performs code and unit testing for complex scope modules, and projects
  • Provide expertise and hands-on experience working on Kafka Connect using the schema registry in a very high-volume environment (~900 million messages)
  • Provide expertise in Kafka brokers, ZooKeeper, KSQL, KStreams, and Confluent Control Center.
  • Provide expertise and hands-on experience working on AvroConverters, JsonConverters, and StringConverters.
  • Provide expertise and hands-on experience working on Kafka connectors such as MQ connectors, Elasticsearch connectors, JDBC connectors, FileStream connectors, and JMS source connectors, along with tasks, workers, converters, and transforms.
  • Provide expertise and hands on experience on custom connectors using the Kafka core concepts and API.
  • Working knowledge of the Kafka REST Proxy.
  • Ensure optimum performance, high availability and stability of solutions.
  • Create topics, set up redundancy clusters, deploy monitoring tools and alerts, and apply best practices.
  • Create stubs for producers, consumers and consumer groups to help onboard applications from different languages/platforms.
  • Leverage Hadoop ecosystem knowledge to design and develop capabilities to deliver solutions using Spark, Scala, Python, Hive, Kafka and other tools in the Hadoop ecosystem.
  • Use automation tools for provisioning, such as Jenkins, uDeploy, or relevant technologies
  • Ability to perform data related benchmarking, performance analysis and tuning.
  • Strong skills in In-memory applications, Database Design, Data Integration.
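As an illustration of the connector work described above, a JDBC source connector is typically registered by POSTing a JSON config to the Kafka Connect REST API. This sketch builds such a request with the standard library; the host, database, table, and topic names are hypothetical:

```python
import json
import urllib.request

# Hypothetical config for the Confluent JDBC source connector, which streams
# new rows (tracked by an incrementing column) into a Kafka topic.
connector = {
    "name": "orders-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mysql://db-host:3306/shop",
        "table.whitelist": "orders",
        "mode": "incrementing",
        "incrementing.column.name": "order_id",
        "topic.prefix": "mysql-",
        "tasks.max": "1",
    },
}

def register(connect_url: str) -> urllib.request.Request:
    """Build the request that would register the connector with Kafka Connect."""
    return urllib.request.Request(
        f"{connect_url}/connectors",
        data=json.dumps(connector).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = register("http://localhost:8083")  # urllib.request.urlopen(req) on a live cluster
print(req.full_url, req.get_method())
# → http://localhost:8083/connectors POST
```

Against a running Kafka Connect worker, sending this request creates the connector and its tasks; the same endpoint pattern applies to the MQ, Elasticsearch, and FileStream connectors mentioned above.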
GitHub

at GitHub

4 recruiters
Nataliia Mediana
Posted by Nataliia Mediana
Remote only
3 - 8 yrs
$24K - $60K / yr
ETL
PySpark
Data engineering
Data engineer
Athena
+9 more
We are a nascent quant hedge fund; we need to stage financial data and make it easy to run and re-run various preprocessing and ML jobs on the data.
- We are looking for an experienced data engineer to join our team.
- The preprocessing involves ETL tasks using PySpark and AWS Glue, staging data in Parquet format on S3, and querying it with Athena

To succeed in this data engineering position, you should care about well-documented, testable code and data integrity. We have DevOps engineers who can help with AWS permissions.
We would like to build up a consistent data lake with staged, ready-to-use data, and to build up various scripts that will serve as blueprints for various additional data ingestion and transforms.

If you enjoy setting up something which many others will rely on, and have the relevant ETL expertise, we’d like to work with you.

Responsibilities
- Analyze and organize raw data
- Build data pipelines
- Prepare data for predictive modeling
- Explore ways to enhance data quality and reliability
- Potentially, collaborate with data scientists to support various experiments
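Athena typically discovers staged data through Hive-style partition paths. The stdlib-only sketch below shows that layout with hypothetical column and file names; the real jobs would write Parquet via PySpark/Glue rather than newline-delimited JSON:

```python
import json
from pathlib import Path
from tempfile import mkdtemp

def stage(records: list[dict], root: Path) -> list[Path]:
    """Stage records under Hive-style dt=YYYY-MM-DD partition directories,
    the layout Athena and Glue crawlers expect."""
    written = []
    for rec in records:
        part = root / f"dt={rec['trade_date']}"
        part.mkdir(parents=True, exist_ok=True)
        path = part / "part-0000.json"
        with path.open("a") as f:
            f.write(json.dumps(rec) + "\n")
        written.append(path)
    return written

root = Path(mkdtemp())
paths = stage(
    [{"trade_date": "2024-01-02", "ticker": "ABC", "close": 10.5}],
    root,
)
print(paths[0].parent.name)  # → dt=2024-01-02
```

Partitioning on a date column like this keeps re-runs of preprocessing jobs cheap, since each run can overwrite a single partition instead of the whole lake.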

Requirements
- Previous experience as a data engineer with the above technologies
Rivet Systems Pvt Ltd.

at Rivet Systems Pvt Ltd.

1 recruiter
Shobha B K
Posted by Shobha B K
Bengaluru (Bangalore)
5 - 19 yrs
₹10L - ₹30L / yr
ETL
Hadoop
Big Data
Pig
Spark
+2 more
Strong exposure in ETL / Big Data / Talend / Hadoop / Spark / Hive / Pig

To be considered for a Senior Data Engineer position, a candidate must have a proven track record of architecting data solutions on current and advanced technical platforms. They must have the leadership ability to lead a team providing data-centric solutions with best practices and modern technologies in mind. They look to build collaborative relationships across all levels of the business and the IT organization. They possess analytical and problem-solving skills, and the ability to research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality, work effectively with business and customers to obtain business value for the requested work, and can communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations. They have demonstrated the ability to perform high-quality work with innovation, both independently and collaboratively.

Bengaluru (Bangalore)
8 - 15 yrs
₹15L - ₹30L / yr
Technical Architecture
Big Data
IT Solutioning
Python
Rest API

Role and Responsibilities

  • Build a low latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
  • Build robust RESTful APIs that serve data and insights to DataWeave and other products
  • Design user interaction workflows on our products and integrate them with data APIs
  • Help stabilize and scale our existing systems. Help design the next generation systems.
  • Scale our back end data and analytics pipeline to handle increasingly large amounts of data.
  • Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
  • Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
  • Constantly think scale, think automation. Measure everything. Optimize proactively.
  • Be a tech thought leader. Add passion and vibrance to the team. Push the envelope.

 

Skills and Requirements

  • 8-15 years of experience building and scaling APIs and web applications.
  • Experience building and managing large scale data/analytics systems.
  • Have a strong grasp of CS fundamentals and excellent problem solving abilities. Have a good understanding of software design principles and architectural best practices.
  • Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
  • Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
  • Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
  • Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, Elastic.
  • Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
  • Use the command line like a pro. Be proficient in Git and other essential software development tools.
  • Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
  • Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog etc.
  • Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
  • Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
  • It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of the projects you have hosted on GitHub.
LatentView Analytics

at LatentView Analytics

3 recruiters
talent acquisition
Posted by talent acquisition
Chennai
3 - 5 yrs
₹0L / yr
Business Analysis
Analytics
Python
Looking for Immediate Joiners

At LatentView, we would expect you to:
  • Independently handle delivery of analytics assignments
  • Mentor a team of 3-10 people and deliver to exceed client expectations
  • Co-ordinate with onsite LatentView consultants to ensure high-quality, on-time delivery
  • Take responsibility for technical skill-building within the organization (training, process definition, research of new tools and techniques, etc.)

You'll be a valuable addition to our team if you have:
  • 3-5 years of hands-on experience in delivering analytics solutions
  • Great analytical skills and a detail-oriented approach
  • Strong experience in R, SAS, Python, SQL, SPSS, Statistica, MATLAB or similar analytic tools (preferable)
  • Working knowledge of MS Excel, PowerPoint and data visualization tools like Tableau
  • Ability to adapt and thrive in the fast-paced environment that young companies operate in
  • A background in Statistics / Econometrics / Applied Math / Operations Research / MBA, or alternatively an engineering degree from a premier institution
Jiva adventures

at Jiva adventures

2 recruiters
Bharat Chinhara
Posted by Bharat Chinhara
Bengaluru (Bangalore)
1 - 4 yrs
₹5L - ₹15L / yr
Data Science
Python
Machine Learning (ML)
Natural Language Processing (NLP)
Should be experienced in building machine learning pipelines.

  • Proficient in Python and scientific packages like pandas, NumPy, scikit-learn, Matplotlib, etc.
  • Experience with techniques such as data mining, distributed computing, applied mathematics and algorithms, and probability & statistics
  • Strong problem-solving and conceptual-thinking abilities
  • Hands-on experience in model building
  • Building highly customized and optimized data pipelines integrating third-party APIs and in-house data sources
  • Extracting features from text data using tools like spaCy
  • Deep learning for NLP using any modern framework
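A minimal example of the kind of ML pipeline described here, using scikit-learn with toy sentiment data (the texts and labels are invented for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: 1 = positive review, 0 = negative.
texts = [
    "great trek, loved the views",
    "terrible service, awful trip",
    "amazing adventure",
    "worst experience ever",
]
labels = [1, 0, 1, 0]

# Chain text feature extraction and a classifier into one pipeline object.
pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipe.fit(texts, labels)

print(pipe.predict(["loved this amazing trek"]))
```

The same pipeline shape extends naturally to spaCy-based feature extraction or a deep-learning model as the final step.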
Get to hear about interesting companies hiring right now
Why apply via Cutshort?
Connect with actual hiring teams and get their fast response. No spam.
Find more jobs