
Data Engineer

Posted by Shivakumar K
3 - 7 yrs
₹10L - ₹20L / yr
Bengaluru (Bangalore)
Skills
Big Data
ETL
Spark
Apache Kafka
Apache Spark
Python
SQL
Java
Databricks

The Data Engineer will be responsible for selecting and integrating the required Big Data tools and frameworks, and for implementing data ingestion and ETL/ELT processes (a minimal ingestion sketch follows the requirements list below).

Required Experience, Skills and Qualifications:

  • Hands-on experience with Big Data tools/technologies such as Spark, Databricks, MapReduce, Hive, and HDFS
  • Expertise in and excellent understanding of the big data toolset, such as Sqoop, Spark Streaming, Kafka, and NiFi
  • Proficiency in at least one programming language (Python, Scala, or Java) with 4+ years' experience
  • Experience with cloud infrastructure such as Microsoft Azure, Data Lake, etc.
  • Good working knowledge of NoSQL databases (MongoDB, HBase, Cassandra)
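
As a rough illustration of the ingestion and ETL/ELT work described above, here is a minimal PySpark sketch. It is a hedged example only: the source path, column names, and output location are assumptions for illustration, not details from the posting.

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical batch ingestion: raw CSV events -> cleaned, partitioned Parquet.
    spark = SparkSession.builder.appName("ingest-events").getOrCreate()

    raw = (spark.read
           .option("header", "true")
           .csv("/mnt/raw/events/"))                     # assumed landing path

    cleaned = (raw
               .dropDuplicates(["event_id"])             # assumed key column
               .withColumn("event_ts", F.to_timestamp("event_ts"))
               .withColumn("event_date", F.to_date("event_ts"))
               .filter(F.col("event_ts").isNotNull()))

    (cleaned.write
            .mode("overwrite")
            .partitionBy("event_date")
            .parquet("/mnt/curated/events/"))            # assumed curated zone

The same pattern extends to ELT by landing the raw data first and pushing the transformations into the warehouse or lakehouse engine instead.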

About Prescience Decision Solutions

Founded: 2017
Size: 20-100
Stage: Profitable
About
Prescience Decision Solutions helps you make sense of your data through Data Analytics, Artificial Intelligence and Machine Learning. We bring our 'Business Backward' approach to understand your business problems and work with you to solve them.
Connect with the team
Shivakumar K

Similar jobs

Quinnox
Posted by MidhunKumar T
Bengaluru (Bangalore), Mumbai
10 - 15 yrs
₹30L - ₹35L / yr
ADF
Azure Data Lake Services
SQL Azure
Azure Synapse
Spark
+4 more

Mandatory Skills: Azure Data Lake Storage, Azure SQL databases, Azure Synapse, Databricks (PySpark/Spark), Python, SQL, Azure Data Factory.


Good to have: Power BI, Azure IaaS services, Azure DevOps, Microsoft Fabric


  • Very strong understanding of ETL and ELT
  • Very strong understanding of Lakehouse architecture
  • Very strong knowledge of PySpark and the Spark architecture (see the sketch after this list)
  • Good knowledge of Azure Data Lake architecture and access controls
  • Good knowledge of Microsoft Fabric architecture
  • Good knowledge of Azure SQL databases
  • Good knowledge of T-SQL
  • Good knowledge of the CI/CD process using Azure DevOps
  • Power BI
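
As a hedged illustration of the PySpark and lakehouse skills listed above, the sketch below reads Parquet files from an ADLS Gen2 path and writes an aggregated Delta table on Databricks. The storage account, container, column names, and table name are assumptions for the example, not details from the posting.

    from pyspark.sql import SparkSession, functions as F

    # Minimal lakehouse-style transform (paths and names are hypothetical).
    spark = SparkSession.builder.appName("adls-lakehouse-demo").getOrCreate()

    bronze_path = "abfss://bronze@examplelake.dfs.core.windows.net/sales/"  # assumed ADLS Gen2 path
    orders = spark.read.format("parquet").load(bronze_path)

    daily = (orders
             .withColumn("order_date", F.to_date("order_ts"))   # assumed timestamp column
             .groupBy("order_date")
             .agg(F.sum("amount").alias("total_amount")))       # assumed measure column

    # Publish the aggregate as a Delta table in the silver layer (name assumed).
    daily.write.format("delta").mode("overwrite").saveAsTable("silver.daily_sales")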

Leading Sales Enabler
Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
5 - 10 yrs
₹25L - ₹40L / yr
ETL
Spark
Python
Amazon Redshift
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience working with relational databases and query authoring (SQL), as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow (a minimal DAG sketch follows this list).
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
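
Since Apache Airflow is called out as a must-have above, here is a minimal, hedged DAG sketch. The DAG id, schedule, and task callables are illustrative assumptions, not details from the posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull raw records from a source system.
        print("extracting")

    def load():
        # Placeholder: write transformed records to the warehouse (e.g. Redshift).
        print("loading")

    with DAG(
        dag_id="example_etl",                 # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task
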
Information Solution Provider Company
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 7 yrs
₹10L - ₹20L / yr
PowerBI
Data modeling
SQL
SSIS
SSAS
  • Good experience with Power BI visualizations and DAX queries in Power BI
  • Experience in implementing Row-Level Security
  • Can understand data models and implement simple-to-medium data models
  • Quick learner, able to pick up the application data design and processes
  • Expert in SQL; able to analyze the current ETL/SSIS process
  • Hands-on experience in data modeling
  • Data warehouse development and work with SSIS & SSAS (good to have)
  • Can lead a team of 2-3 developers and own deliverables
Hyderabad
5 - 10 yrs
₹19L - ₹25L / yr
ETL
Informatica
Data Warehouse (DWH)
Windows Azure
Microsoft Windows Azure
+4 more

A business transformation organization that partners with businesses to co-create customer-centric, hyper-personalized solutions to achieve exponential growth. Invente offers platforms and services that enable businesses to provide a human-free customer experience and business process automation.


Location: Hyderabad (WFO)

Budget: Open

Position: Azure Data Engineer

Experience: 5+ years of commercial experience


Responsibilities

●  Design and implement Azure data solutions using ADLS Gen 2.0, Azure Data Factory, Synapse, Databricks, SQL, and Power BI
●  Build and maintain data pipelines and ETL processes to ensure efficient data ingestion and processing
●  Develop and manage data warehouses and data lakes
●  Ensure data quality, integrity, and security (a minimal quality-check sketch follows this list)
●  Implement existing use cases required by the AI and analytics teams
●  Collaborate with other teams to integrate data solutions with other systems and applications
●  Stay up to date with emerging data technologies and recommend new solutions to improve our data infrastructure
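
To make the data-quality responsibility above concrete, here is a minimal, hedged PySpark validation sketch; the table name, key column, and failure policy are assumptions for illustration, not the team's actual rules.

    from pyspark.sql import SparkSession, functions as F

    # Simple data-quality gate before publishing a curated table (names hypothetical).
    spark = SparkSession.builder.appName("dq-checks").getOrCreate()

    df = spark.table("curated.customers")        # assumed curated table

    total = df.count()
    null_keys = df.filter(F.col("customer_id").isNull()).count()        # assumed key column
    duplicate_keys = total - df.dropDuplicates(["customer_id"]).count()

    # Fail the pipeline run if basic integrity rules are violated.
    if null_keys > 0 or duplicate_keys > 0:
        raise ValueError(
            f"Data quality check failed: {null_keys} null keys, {duplicate_keys} duplicates"
        )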


Phenom People
Posted by Srivatsav Chilukoori
Hyderabad
3 - 6 yrs
₹10L - ₹18L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Python
Deep Learning
+4 more

JOB TITLE - Product Development Engineer - Machine Learning
● Work Location: Hyderabad
● Full-time
 

Company Description

Phenom People is the leader in Talent Experience Marketing (TXM for short). We’re an early-stage startup on a mission to fundamentally transform how companies acquire talent. As a category creator, our goals are two-fold: to educate talent acquisition and HR leaders on the benefits of TXM and to help solve their recruiting pain points.
 

Job Responsibilities:

  • Design and implement machine learning, information extraction, and probabilistic matching algorithms and models (a small matching sketch follows this list)
  • Research and develop innovative, scalable, and dynamic solutions to hard problems
  • Work closely with Machine Learning Scientists (PhDs), ML engineers, data scientists, and data engineers to address challenges head-on
  • Use the latest advances in NLP, data science, and machine learning to enhance our products and create new experiences
  • Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
  • Be a valued contributor in shaping the future of our products and services
  • You will be part of our Data Science & Algorithms team and collaborate with product management and other team members
  • Be part of a fast-paced, fun-focused, agile team
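
As a hedged illustration of the matching work mentioned in the responsibilities above (not Phenom's actual method), the sketch below scores candidate summaries against a job description using TF-IDF and cosine similarity with scikit-learn; all of the texts are invented examples.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy corpus: one job description and a few candidate summaries (all invented).
    job = "machine learning engineer nlp python spark information extraction"
    candidates = [
        "data engineer spark kafka airflow python",
        "nlp researcher python deep learning information extraction",
        "frontend developer react typescript css",
    ]

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([job] + candidates)

    # Cosine similarity of each candidate against the job description.
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    for text, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {text}")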

Job Requirement:

  • 4+ years of industry experience
  • Ph.D./MS/B.Tech in computer science, information systems, or a similar technical field
  • Strong mathematics, statistics, and data analytics skills
  • Solid coding and engineering skills, preferably in machine learning (not mandatory)
  • Proficient in Java, Python, and Scala
  • Industry experience building and productionizing end-to-end systems
  • Knowledge of information extraction and NLP algorithms, coupled with deep learning
  • Experience with data processing and storage frameworks like Hadoop, Spark, Kafka, etc.


Position Summary

We're looking for a Machine Learning Engineer to join our team at Phenom. We expect candidates to fulfill the points below for this role.

  • Building accurate machine learning models is the main goal of a machine learning engineer
  • Linear algebra, applied statistics, and probability
  • Building data models
  • Strong knowledge of NLP
  • Good understanding of multithreaded and object-oriented software development
  • Mathematics, Mathematics, and Mathematics
  • Collaborate with data engineers to prepare the data models required for machine learning models
  • Collaborate with other product team members to apply state-of-the-art AI methods, including dialogue systems, natural language processing, information retrieval, and recommendation systems
  • Build large-scale software systems and work on numerical computation topics
  • Use predictive analytics and data mining to solve complex problems and drive business decisions
  • Should be able to design an accurate end-to-end ML architecture, including data flows, algorithm scalability, and applicability
  • Tackle situations where both the problem and the solution are unknown
  • Solve analytical problems, and effectively communicate methodologies and results to customers
  • Adept at translating business needs into technical requirements and translating data into actionable insights
  • Work closely with internal stakeholders such as business teams, product managers, engineering teams, and customer success teams

Benefits

  • Competitive salary for a startup
  • Gain experience rapidly
  • Work directly with the executive team
  • Fast-paced work environment

 

About Phenom People

At PhenomPeople, we believe candidates (Job seekers) are consumers. That’s why we’re bringing e-commerce experience to the job search, with a view to convert candidates into applicants. The Intelligent Career Site™ platform delivers the most relevant and personalized job search yet, with a career site optimized for mobile and desktop interfaces designed to integrate with any ATS, tailored content selection like Glassdoor reviews, YouTube videos and LinkedIn connections based on candidate search habits and an integrated real-time recruiting analytics dashboard.

 

 Use Company career sites to reach candidates and encourage them to convert. The Intelligent Career Site™ offers a single platform to serve candidates a modern e-commerce experience from anywhere on the globe and on any device.

 We track every visitor that comes to the Company career site. Through fingerprinting technology, candidates are tracked from the first visit and served jobs and content based on their location, click-stream, behavior on site, browser and device to give each visitor the most relevant experience.

 Like consumers, candidates research companies and read reviews before they apply for a job. Through our understanding of the candidate journey, we are able to personalize their experience and deliver relevant content from sources such as corporate career sites, Glassdoor, YouTube and LinkedIn.

 We give you clear visibility into the Company's candidate pipeline. By tracking up to 450 data points, we build profiles for every career site visitor based on their site visit behavior, social footprint and any other relevant data available on the open web.

 Gain a better understanding of Company’s recruiting spending and where candidates convert or drop off from Company’s career site. The real-time analytics dashboard offers companies actionable insights on optimizing source spending and the candidate experience.

 

Kindly explore the company Phenom (https://www.phenom.com/).
YouTube - https://www.youtube.com/c/PhenomPeople
LinkedIn - https://www.linkedin.com/company/phenompeople/

Phenom | Talent Experience Management - https://www.phenom.com/

Cloudbloom Systems LLP
Posted by Sahil Rana
Bengaluru (Bangalore)
6 - 10 yrs
₹13L - ₹25L / yr
Machine Learning (ML)
Python
Data Science
Natural Language Processing (NLP)
Computer Vision
+3 more
Description
Duties and Responsibilities:
  • Research and develop innovative use cases, solutions, and quantitative models
  • Quantitative models in video and image recognition and signal processing for cloudbloom's cross-industry business (e.g., Retail, Energy, Industry, Mobility, Smart Life and Entertainment)
  • Design, implement, and demonstrate proofs-of-concept and working prototypes
  • Provide R&D support to productize research prototypes
  • Explore emerging tools, techniques, and technologies, and work with academia for cutting-edge solutions
  • Collaborate with cross-functional teams and ecosystem partners for mutual business benefit
  • Team management skills
Academic Qualification
  • 7+ years of professional hands-on work experience in data science, statistical modelling, data engineering, and predictive analytics assignments
  • Mandatory requirement: Bachelor's degree with a STEM background (Science, Technology, Engineering, and Mathematics) and a strong quantitative flavour
  • Innovative and creative in data analysis, problem solving, and presentation of solutions
  • Ability to establish effective cross-functional partnerships and relationships at all levels in a highly collaborative environment
  • Strong experience in handling multinational client engagements
  • Good verbal, writing, and presentation skills
Core Expertise
  • Excellent understanding of the basics of mathematics and statistics (such as differential equations, linear algebra, matrices, combinatorics, probability, Bayesian statistics, eigenvectors, Markov models, Fourier analysis)
  • Building data analytics models using Python, ML libraries, Jupyter/Anaconda, and knowledge of database query languages like SQL
  • Good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM, and decision forests (a small classifier sketch follows this list)
  • Strong math skills (multivariable calculus and linear algebra): understanding the fundamentals of multivariable calculus and linear algebra is important, as they form the basis of a lot of predictive-performance and algorithm-optimization techniques
  • Deep learning: CNNs, neural networks, RNNs, TensorFlow, PyTorch, computer vision
  • Large-scale data extraction/mining, data cleansing, diagnostics, and preparation for modeling
  • Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, multivariate techniques, and predictive modeling: cluster analysis, discriminant analysis, CHAID, logistic and multiple regression analysis
  • Experience with data visualization tools like Tableau, Power BI, and Qlik Sense that help to visually encode data
  • Excellent communication skills: it is incredibly important to describe findings to both technical and non-technical audiences
  • Capability for continuous learning and knowledge acquisition
  • Mentor colleagues for growth and success
  • Strong software engineering background
  • Hands-on experience with data science tools
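
As a hedged illustration of the classical machine learning methods listed above, here is a minimal scikit-learn sketch comparing two of them on a toy dataset; the dataset and model settings are illustrative assumptions only.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier

    # Toy dataset standing in for a real business problem.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

    for name, model in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                        ("Naive Bayes", GaussianNB())]:
        model.fit(X_train, y_train)
        print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
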
Simplifai Cognitive Solutions Pvt Ltd
Posted by Priyanka Malani
Pune
2 - 15 yrs
₹10L - ₹30L / yr
Spark
Big Data
Apache Spark
Python
PySpark
+1 more

We are looking for a skilled Senior/Lead Big Data Engineer to join our team. The role is part of the research and development team, where, with your enthusiasm and knowledge, you are going to be our technical evangelist for the development of our inspection technology and products.

 

At Elop we are developing product lines for sustainable infrastructure management using our own patented technology for ultrasound scanners, combining this with other sources to provide a holistic overview of the concrete structure. At Elop we will provide you with world-class colleagues who are highly motivated to position the company as an international standard for structural health monitoring. With the right character, you will be professionally challenged and developed.

This position requires travel to Norway.

 

Elop is a sister company of Simplifai, and the two are co-located in all geographic locations.

https://elop.no/

https://www.simplifai.ai/en/


Roles and Responsibilities

  • Define the technical scope and objectives through research and participation in requirements gathering and the definition of processes
  • Ingest and process raw data from data sources (Elop Scanner) into the Big Data ecosystem
  • Real-time data feed processing using the Big Data ecosystem (a minimal streaming sketch follows this list)
  • Design, review, implement, and optimize data transformation processes in the Big Data ecosystem
  • Test and prototype new data integration/processing tools, techniques, and methodologies
  • Conversion of MATLAB code into Python/C/C++
  • Participate in overall test planning for application integrations, functional areas, and projects
  • Work with cross-functional teams in an Agile/Scrum environment to ensure a quality product is delivered
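
To illustrate the real-time feed processing mentioned in the responsibilities above, here is a minimal, hedged PySpark Structured Streaming sketch that consumes a Kafka topic; the broker address, topic name, and checkpoint path are assumptions for the example, not project details.

    from pyspark.sql import SparkSession, functions as F

    # Minimal streaming job: Kafka topic -> parsed records -> console sink.
    spark = SparkSession.builder.appName("scanner-stream-demo").getOrCreate()

    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "localhost:9092")   # assumed broker
           .option("subscribe", "scanner-readings")               # assumed topic
           .load())

    # Kafka values arrive as bytes; cast to string for downstream parsing.
    readings = raw.select(F.col("value").cast("string").alias("payload"),
                          F.col("timestamp"))

    query = (readings.writeStream
             .format("console")
             .option("checkpointLocation", "/tmp/checkpoints/scanner")  # assumed path
             .start())

    query.awaitTermination()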

Desired Candidate Profile

  • Bachelor's degree in Statistics, Computer Science, or equivalent
  • 7+ years of experience in the Big Data ecosystem, especially Spark, Kafka, Hadoop, and HBase
  • 7+ years of hands-on experience in Python/Scala is a must
  • Experience in architecting big data applications is needed
  • Excellent analytical and problem-solving skills
  • Strong understanding of data analytics and data visualization; must be able to help the development team with visualization of data
  • Experience with signal processing is a plus
  • Experience working on client-server architecture is a plus
  • Knowledge of database technologies like RDBMS, graph DBs, document DBs, Apache Cassandra, and OpenTSDB
  • Good communication skills, written and oral, in English

We can Offer

  • An everyday life with exciting and challenging tasks, developing socially beneficial solutions
  • Be part of the company's research and development team creating unique and innovative products
  • Colleagues with world-class expertise, and an organization that has ambitions and is highly motivated to position the company as an international player in maintenance support and monitoring of critical infrastructure
  • A good working environment with skilled and committed colleagues, in an organization with short decision paths
  • Professional challenges and development
MNC
Bengaluru (Bangalore)
3 - 9 yrs
₹3L - ₹17L / yr
Scala
Spark
Data Warehouse (DWH)
Business Intelligence (BI)
Apache Spark
+2 more
Dear All,
We are looking for candidates who have good BI/DW experience of 3-6 years with Spark, Scala, and SQL expertise, and with Azure. An Azure background is needed.
     * Spark hands-on: Must have
     * Scala hands-on: Must have
     * SQL expertise: Expert
     * Azure background: Must have
     * Python hands-on: Good to have
     * ADF, Databricks: Good to have
     * Should be able to communicate effectively and deliver technology implementations end to end
Looking for candidates who can join within 15 to 30 days or who are available immediately.


Regards
Gayatri P
Fragma Data Systems
Aptus Data Labs
Posted by Merlin Metilda
Bengaluru (Bangalore)
5 - 10 yrs
₹6L - ₹15L / yr
Data engineering
Big Data
Hadoop
Data Engineer
Apache Kafka
+5 more

Roles & Responsibilities

  1. Proven experience with deploying and tuning open source components into enterprise-ready production tooling. Experience with datacentre (Metal as a Service, MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required).
  2. Deep understanding of Linux, from kernel mechanisms through user-space management.
  3. Experience with CI/CD (Continuous Integration and Deployment) system solutions (Jenkins).
  4. Using monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, New Relic, etc. to trigger instant alerts, reports, and dashboards. Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime for globally distributed, clustered, production and non-production virtualized infrastructure.
  5. Wide understanding of IP networking as well as data centre infrastructure.

Skills

  1. Expert with software development tools and source code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production. Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2.
  2. Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, and JMX Exporter agents.
  3. Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing (a minimal consumer sketch follows this list).
  4. Strong understanding of, and hands-on experience with:
     • the Apache Spark framework, specifically Spark Core and Spark Streaming;
     • orchestration platforms: Mesos and Kubernetes;
     • data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS;
     • core presentation technologies: Kibana and Grafana.
  5. Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products.
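
As a hedged illustration of using Kafka as a message queue for big data processing, here is a minimal consumer sketch using the kafka-python client; the broker address, topic, and consumer group are assumptions for the example.

    import json

    from kafka import KafkaConsumer

    # Minimal consumer: read JSON events from a topic and hand them to processing.
    consumer = KafkaConsumer(
        "metrics-events",                         # assumed topic name
        bootstrap_servers="localhost:9092",       # assumed broker
        group_id="big-data-ingest",               # assumed consumer group
        auto_offset_reset="earliest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        event = message.value
        # Placeholder for the real processing step (e.g. forward to Spark or storage).
        print(f"partition={message.partition} offset={message.offset} event={event}")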

Certification

Red Hat Certified Architect certificate or equivalent required. CCNA certificate required. 3-5 years of experience running open source big data platforms.

Datalicious Pty Ltd
Posted by Ramjee Ganti
Bengaluru (Bangalore)
2 - 7 yrs
₹7L - ₹20L / yr
Python
Amazon Web Services (AWS)
Google Cloud Storage
Big Data
Data Analytics
+3 more
DESCRIPTION:
We're looking for an experienced Data Engineer with strong cloud technology experience to be part of our team and help our big data team take our products to the next level. This is a hands-on role; you will be required to code and develop the product in addition to your leadership role. You need to have a strong software development background and love working with cutting-edge big data platforms. You are expected to bring extensive hands-on experience with Amazon Web Services (Kinesis streams, EMR, Redshift), Spark, and other Big Data processing frameworks and technologies, as well as advanced knowledge of RDBMS and Data Warehousing solutions.

REQUIREMENTS:
  • Strong background working on large-scale Data Warehousing and Data processing solutions.
  • Strong Python and Spark programming experience.
  • Strong experience in building big data pipelines (a minimal Kinesis sketch follows this section).
  • Very strong SQL skills are an absolute must.
  • Good knowledge of OO, functional, and procedural programming paradigms.
  • Strong understanding of various design patterns.
  • Strong understanding of data structures and algorithms.
  • Strong experience with Linux operating systems.
  • At least 2+ years of experience working as a software developer or in a data-driven environment.
  • Experience working in an agile environment.
  • Lots of passion, motivation, and drive to succeed!

Highly desirable:
  • Understanding of agile principles, specifically Scrum.
  • Exposure to Google Cloud Platform services such as BigQuery, Compute Engine, etc.
  • Docker, Puppet, Ansible, etc.
  • Understanding of the digital marketing and digital advertising space would be advantageous.

BENEFITS:
Datalicious is a global data technology company that helps marketers improve customer journeys through the implementation of smart data-driven marketing strategies. Our team of marketing data specialists offers a wide range of skills suitable for any challenge and covers everything from web analytics to data engineering, data science, and software development.
  • Experience: Join us at any level and we promise you'll feel up-levelled in no time, thanks to the fast-paced, transparent, and aggressive growth of Datalicious.
  • Exposure: Work with ONLY the best clients in the Australian and SEA markets; every problem you solve would directly impact millions of real people at a large scale across industries.
  • Work Culture: Voted one of the Top 10 Tech Companies in Australia. Never a boring day at work, and we walk the talk. The CEO organises nerf-gun bouts in the middle of a hectic day.
  • Money: We'd love to have a long-term relationship because long-term benefits are exponential. We encourage people to get technical certifications via online courses or digital schools.
So if you are looking for the chance to work for an innovative, fast-growing business that will give you exposure across a diverse range of the world's best clients, products, and industry-leading technologies, then Datalicious is the company for you!
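
As a hedged illustration of the AWS streaming-pipeline experience mentioned above, the sketch below publishes one record to a Kinesis data stream with boto3; the stream name, region, and payload are assumptions for the example.

    import json

    import boto3

    # Minimal producer: push one JSON event into a Kinesis data stream.
    kinesis = boto3.client("kinesis", region_name="ap-southeast-2")   # assumed region

    event = {"user_id": "u-123", "action": "page_view", "ts": "2024-01-01T00:00:00Z"}

    response = kinesis.put_record(
        StreamName="clickstream-events",          # assumed stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["user_id"],            # spreads records across shards
    )

    print("Sequence number:", response["SequenceNumber"])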