Sr Data Engineer - (Python, Pandas)
Posted by Arjun Shivraj
5 - 20 yrs
₹20L - ₹35L / yr (ESOP available)
Bengaluru (Bangalore)
Skills
Python
ETL
Big Data
Amazon Web Services (AWS)
pandas

What you’ll do

  • Deliver plugins for our Python-based ETL pipelines.
  • Deliver Python microservices for provisioning and managing cloud infrastructure.
  • Implement algorithms to analyse large data sets.
  • Draft design documents that translate requirements into code.
  • Deal with challenges associated with handling large volumes of data.
  • Assume responsibilities from technical design through technical client support.
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment.
  • Thrive in an environment that uses AWS and Elasticsearch extensively.
  • Keep abreast of technology and contribute to the engineering strategy.
  • Champion best development practices and provide mentorship.
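The plugin-based ETL work described above can be pictured with a minimal sketch in plain Python. The registry, plugin names, and record shape here are illustrative assumptions for the sake of example, not SteelEye's actual plugin API:

```python
from typing import Callable, Dict, List

# Hypothetical plugin registry: each plugin is a named transform that
# takes a list of records (dicts) and returns a new list of records.
Record = Dict[str, object]
PLUGINS: Dict[str, Callable[[List[Record]], List[Record]]] = {}

def plugin(name: str):
    """Register a transform step under a name so pipelines can refer to it."""
    def register(fn: Callable[[List[Record]], List[Record]]):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("drop_nulls")
def drop_nulls(records: List[Record]) -> List[Record]:
    # Keep only records where every field has a value.
    return [r for r in records if all(v is not None for v in r.values())]

@plugin("normalise_symbol")
def normalise_symbol(records: List[Record]) -> List[Record]:
    # Upper-case the trade symbol so downstream joins are consistent.
    return [{**r, "symbol": str(r["symbol"]).upper()} for r in records]

def run_pipeline(records: List[Record], steps: List[str]) -> List[Record]:
    """Apply registered plugins in order."""
    for step in steps:
        records = PLUGINS[step](records)
    return records

trades = [
    {"symbol": "aapl", "qty": 100},
    {"symbol": "msft", "qty": None},
]
clean = run_pipeline(trades, ["drop_nulls", "normalise_symbol"])
print(clean)  # [{'symbol': 'AAPL', 'qty': 100}]
```

In a real pipeline the same registry pattern lets new plugins ship independently of the pipeline runner, which is what makes "deliver plugins" a well-scoped unit of work.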

What we’re looking for

  • Experience in Python 3.
  • Data-focused Python libraries such as pandas and NumPy.
  • AWS.
  • Elasticsearch.
  • Performance tuning.
  • Object-Oriented Design and Modelling.
  • Delivering complex software, ideally in a FinTech setting.
  • CI/CD tools.
  • Knowledge of design patterns.
  • Sharp analytical and problem-solving skills.
  • Strong sense of ownership.
  • Demonstrable desire to learn and grow.
  • Excellent written and oral communication skills.
  • Mature collaboration and mentoring abilities.

About SteelEye Culture

  • Work from home until you are vaccinated against COVID-19
  • Top of the line health insurance
  • Order discounted meals every day from a dedicated portal
  • Fair and simple salary structure
  • 30+ holidays in a year
  • Fresh fruits every day
  • Centrally located. 5 mins to the nearest metro station (MG Road)
  • Measured on output and not input

About SteelEye

Founded: 2017
Stage: Raised funding
About
SteelEye is a fast-growing FinTech company headquartered in London, with offices in Bangalore and Paris, that offers a data platform helping financial institutions such as investment banks, hedge funds, brokerage firms, and asset management firms comply with financial regulations in the European Union.

Our clients can aggregate, search, surveil, and report on trade, communications, and market data. SteelEye also enables customers to gain powerful insights from their data, helping them to trade with greater efficiency and profitability. The company has a highly experienced management team and a strong board, with decades of technology and management experience in senior positions at many leading international financial businesses.

We are a vibrant, fun and exciting group of people that share a passion for technology and data. If you have what it takes to become a part of the SteelEye family, you have come to the right place. This is where you will find information about our people, culture and our current job opportunities.
Read more
Company video
Connect with the team
akanksha rajput
Arjun Shivraj
Company social profiles
Blog | LinkedIn | Twitter

Similar jobs

mazosol
Posted by kirthick murali
Mumbai
10 - 20 yrs
₹30L - ₹58L / yr
Python
R Programming
PySpark
Google Cloud Platform (GCP)
SQL Azure

Data Scientist – Program Embedded 

Job Description:   

We are seeking a highly skilled and motivated senior data scientist to support a big data program. The successful candidate will play a pivotal role across multiple projects in this program, covering traditional tasks such as revenue management, demand forecasting, and improving customer experience, as well as testing and adopting new tools and platforms such as Copilot and Fabric for different purposes. The ideal candidate has deep expertise in machine learning methodology and applications, and has completed multiple large-scale data science projects (full cycle, from ideation to BAU). Beyond technical expertise, problem solving in complex set-ups will be key to success in this role. Because this data science role is embedded directly in the program and its projects, stakeholder management and collaboration with partners are crucial to success, on top of deep technical expertise.

What we are looking for: 

  1. Highly proficient in Python/PySpark/R.
  2. Understands MLOps concepts, with working experience in product industrialization from a data science point of view: building products for live deployment with continuous integration and continuous delivery.
  3. Familiar with cloud platforms such as Azure and GCP, and the data management systems on those platforms. Familiar with Databricks and product deployment on Databricks.
  4. Experience in ML projects involving techniques such as regression, time series, clustering, classification, dimension reduction, and anomaly detection, with both traditional ML and DL approaches.
  5. Solid background in statistics: probability distributions, A/B testing validation, univariate/multivariate analysis, hypothesis testing for different purposes, data augmentation, etc.
  6. Familiar with designing testing frameworks for different modelling practices/projects based on business needs.
  7. Exposure to Gen AI tools, with enthusiasm for experimenting and new ideas on what can be done.
  8. Having improved an internal company process using an AI tool (e.g. process simplification, manual task automation, automated emails) would be a plus.
  9. Ideally 10+ years of experience, including independent business-facing roles.
  10. CPG or retail experience as a data scientist is nice to have, but not the top priority, especially for candidates who have navigated multiple industries.
  11. Being proactive and collaborative is essential.

 

Some projects examples within the program: 

  1. Test new tools/platforms such as Copilot and Fabric for commercial reporting: testing, validation, and building trust.
  2. Build algorithms for predicting trends in category and consumption to support dashboards.
  3. Revenue Growth Management: create and understand the algorithms behind the tools we need to maintain or choose to improve (which may be built by third parties). Able to prioritize and build a product roadmap, design new solutions, and articulate/quantify the limitations of those solutions.
  4. Demand forecasting: create localized forecasts to improve in-store availability, with proper model monitoring for early detection of potential issues in the forecast, focusing particularly on improving the end-user experience.


Bengaluru (Bangalore)
4 - 7 yrs
₹20L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
Roles & Responsibilities
What will you do?
  • Deliver plugins for our Python-based ETL pipelines
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Implement algorithms to analyse large data sets
  • Draft design documents that translate requirements into code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
What are we looking for?
  • First and foremost, you are a Python developer, experienced with the Python data stack
  • You love and care about data
  • Your code is an artistic manifesto, reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern emerges from your code
  • You embrace the principles of DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple)
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or are keen to take, your personal projects, and any contributions to open-source communities)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
Intuitive Technology Partners
Posted by shalu Jain
Remote only
10 - 20 yrs
₹15L - ₹35L / yr
Internet of Things (IOT)
Amazon Web Services (AWS)
cloud native
SiteWise
AWS Lambda
+5 more

As a Sr. Cloud IoT Engineer with Intuitive, you will be responsible for data acquisition from devices and sensors, allowing each device to connect seamlessly with other systems. You will research, create, test, and document IoT solutions with integrated systems and devices to support the analytics and data science initiatives across our enterprise customers.

You will work closely with SMEs in Data Engineering and Cloud Engineering to create solutions and extend Intuitive's DataOps engineering projects and initiatives. The Sr. Cloud IoT Engineer plays a central, critical role in establishing DataOps/DataX data logistics and management: building data pipelines, enforcing best practices, owning the construction of complex and performant data lake environments, and working closely with Cloud Infrastructure Architects and DevSecOps automation teams. The Sr. Cloud IoT Engineer is the main point of contact for everything related to ingesting telemetry data and landing it in time-series or other databases. In this role, we expect our DataOps leaders to be obsessed with telemetry data and with providing insights that help our end customers.

  KEY REQUIREMENTS: 

  • 10+ years’ experience as a data engineer.
  • Must have 5+ years implementing IoT engineering solutions with multiple cloud providers and toolsets.
  • This is a hands-on role building data pipelines using cloud-native and partner solutions, with hands-on technical experience handling data at scale.
  • Must have a deep understanding of solutions like the AWS IoT Greengrass edge runtime and cloud services to build, deploy, and manage device software across a fleet of devices.
  • Hands-on experience with the AWS IoT stack: Core, SiteWise, Kinesis, Lambda, Timestream.
  • Performance tuning of streaming telemetry data using tools like Grafana, Amazon CloudWatch, and QuickSight, informed by business requirements.
  • Good working experience with web service integration: RESTful APIs, WebSockets, and MQTT.
  • Experience with development tools for CI/CD, unit and integration testing, automation, and orchestration.
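The telemetry-ingestion responsibility above can be sketched as a single parsing step that turns one MQTT message into a record ready for a time-series store. The topic layout and payload fields below are hypothetical conventions for illustration, not a fixed AWS IoT format:

```python
import json
from datetime import datetime, timezone

def parse_telemetry(topic: str, payload: bytes) -> dict:
    """Turn one MQTT message into a flat record for a time-series database.

    Assumes topics shaped like 'plant/<site>/<device>/telemetry' and JSON
    payloads carrying 'ts' (epoch seconds) plus numeric sensor fields --
    both are illustrative conventions invented for this sketch.
    """
    _, site, device, _ = topic.split("/")
    body = json.loads(payload)
    ts = datetime.fromtimestamp(body.pop("ts"), tz=timezone.utc)
    return {
        "site": site,
        "device": device,
        "time": ts.isoformat(),
        "measures": body,  # remaining keys are sensor readings
    }

record = parse_telemetry(
    "plant/blr1/pump-42/telemetry",
    b'{"ts": 1700000000, "temp_c": 61.5, "rpm": 1480}',
)
print(record["device"], record["measures"]["temp_c"])
```

In production this function would sit inside an MQTT subscriber callback or a Kinesis consumer, with the resulting records batched into writes against Timestream or a similar store.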

 

 

 

 

Railofy
Posted by Manan Jain
Mumbai
2 - 5 yrs
₹5L - ₹12L / yr
Data Science
Python
R Programming

About Us:

We are a VC-funded startup solving one of the biggest transportation problems India faces. Most passengers in India travel long distances on IRCTC trains. At the time of booking, approximately 1 out of every 2 passengers ends up with a Waitlisted or RAC ticket. This creates a lot of anxiety for passengers, as the Railways announces only 4 hours before departure whether they have a confirmed seat. We solve this problem through our Waitlist & RAC Protection, which can be bought against each IRCTC ticket at the time of booking. If the train ticket is not confirmed, we fly the passenger to the destination. Our founding team consists of 3 founders from IIT, IIM and ISB.

Functional Experience:

  • Computer Science or IT Engineering background with solid understanding of basics of Data Structures and Algorithms
  • 2+ years of data science experience working with large datasets
  • Expertise in Python packages like pandas, NumPy, scikit-learn, matplotlib, seaborn, Keras and TensorFlow
  • Expertise in Big Data technologies like Hadoop, Cassandra and PostgreSQL
  • Expertise in cloud computing on AWS with EC2, AutoML, Lambda and RDS
  • Good knowledge of Machine Learning and statistical time series analysis (optional)
  • Unparalleled logical ability, making you the go-to person for all things related to data
  • You love coding like a hobby and are up for a challenge!

 

Cultural:

  • Assume a strong sense of ownership of analytics: design, develop & deploy
  • Collaborate with senior management, operations & business team
  • Ensure Quality & sustainability of the architecture
  • Motivation to join an early stage startup should go beyond compensation
Indium Software
Posted by Swaathipriya P
Bengaluru (Bangalore), Hyderabad
2 - 5 yrs
₹1L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more
2+ years of Analytics experience, predominantly in SQL, SAS, Statistics, R, Python and Visualization
Experienced in writing complex SQL select queries (window functions & CTEs) with advanced SQL experience
Should be an individual contributor for the initial few months; based on project movement, a team will be aligned
Strong in querying logic and data interpretation
Solid communication and articulation skills
Able to handle stakeholders independently with minimal intervention from the reporting manager
Develop strategies to solve problems in logical yet creative ways
Create custom reports and presentations accompanied by strong data visualization and storytelling
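The "window functions & CTEs" requirement above can be illustrated with Python's built-in sqlite3 module (the bundled SQLite supports window functions from version 3.25 onward, which ships with modern Python builds). The table and data are invented for the example:

```python
import sqlite3

# A CTE feeding a window function: rank each rep's deal within
# their region by amount, largest first.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE deals (region TEXT, rep TEXT, amount INT)")
con.executemany(
    "INSERT INTO deals VALUES (?, ?, ?)",
    [("south", "asha", 500), ("south", "ravi", 900), ("north", "mei", 700)],
)

query = """
WITH regional AS (          -- CTE: in practice this would filter or join source tables
    SELECT region, rep, amount FROM deals
)
SELECT region, rep, amount,
       RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
FROM regional
ORDER BY region, rnk
"""
rows = con.execute(query).fetchall()
for row in rows:
    print(row)
```

The same PARTITION BY / ORDER BY pattern covers the common interview staples (top-N per group, running totals, deduplication by row number) that "complex SQL select queries" usually refers to.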
AES Technologies
Posted by Ragavendra G
Dubai
2 - 4 yrs
Best in industry
Python
Windows Azure
Java
Big Data
Scala

As a Data Engineer, your role will encompass: 

  • Designing and building production data pipelines from ingestion to consumption within a hybrid big data architecture, using Scala, Python, Talend, etc.
  • Gathering and addressing technical and design requirements.
  • Refactoring existing applications to optimize performance by setting the appropriate architecture and integrating best practices and standards.
  • Participating in the entire data life cycle, mainly focusing on coding, debugging, and testing.
  • Troubleshooting and debugging ETL pipelines.
  • Documenting each process.

Technical Requirements: - 

  • BSc degree in Computer Science/Computer Engineering (a Masters is a plus).
  • 2+ years of experience as a Data Engineer.
  • In-depth understanding of core ETL concepts, Data Modelling, Data Lineage, Data Governance, Data Catalogs, etc.
  • 2+ years of work experience in Scala, Python, or Java.
  • Good knowledge of Big Data tools such as Spark, HDFS, Hive, Flume, etc.
  • Hands-on experience with ETL tools like Talend/Informatica is a plus.
  • Good knowledge of Kafka and Spark Streaming is a big plus.
  • 2+ years of experience using Azure cloud and its resources/services (such as Azure Data Factory, Azure Databricks, SQL Synapse, Azure DevOps, Logic Apps, Power BI, Azure Event Hubs, etc.).
  • Strong experience in relational databases (MySQL, SQL Server).
  • Exposure to data visualization tools like Power BI / Qlik Sense / MicroStrategy.
  • 2+ years of experience in developing APIs (REST & SOAP protocols).
  • Strong knowledge of Continuous Integration & Continuous Deployment (CI/CD) utilizing Docker containers, Jenkins, etc.
  • Strong competencies in algorithms and software architecture.
  • Excellent analytical and teamwork skills.

 Good to have: - 

  • Previous on-prem working experience is a plus. 
  • In-depth understanding of the entire web development process (design, development, and deployment) 
  • Previous experience in automated testing including unit testing & UI testing. 

 

PAGO Analytics India Pvt Ltd
Posted by Vijay Cheripally
Remote, Bengaluru (Bangalore), Mumbai, NCR (Delhi | Gurgaon | Noida)
2 - 8 yrs
₹8L - ₹15L / yr
Python
PySpark
Microsoft Windows Azure
SQL Azure
Data Analytics
+6 more
Be an integral part of large scale client business development and delivery engagements
Develop the software and systems needed for end-to-end execution on large projects
Work across all phases of SDLC, and use Software Engineering principles to build scaled solutions
Build the knowledge base required to deliver increasingly complex technology projects


Object-oriented languages (e.g. Python, PySpark, Java, C#, C++ ) and frameworks (e.g. J2EE or .NET)
Database programming using any flavours of SQL
Expertise in relational and dimensional modelling, including big data technologies
Exposure across all the SDLC process, including testing and deployment
Expertise in Microsoft Azure is mandatory, including components like Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service, etc.
Good knowledge of Python and Spark are required
Good understanding of how to enable analytics using cloud technology and ML Ops
Experience in Azure Infrastructure and Azure Dev Ops will be a strong plus
INSOFE
Posted by Nitika Bist
Hyderabad, Bengaluru (Bangalore)
7 - 10 yrs
₹12L - ₹18L / yr
Big Data
Data engineering
Apache Hive
Apache Spark
Hadoop
+4 more
Roles & Responsibilities:
  • Total experience of 7-10 years; should be interested in teaching and research
  • 3+ years’ experience in data engineering, including data ingestion, preparation, provisioning, automated testing, and quality checks
  • 3+ years’ hands-on experience with Big Data cloud platforms like AWS and GCP, Data Lakes and Data Warehouses
  • 3+ years of Big Data and Analytics technologies; experience in SQL and in writing code for the Spark engine in Python, Scala or Java
  • Experience in designing, building, and maintaining ETL systems
  • Experience in data pipeline and workflow management tools like Airflow
  • Application development background, along with knowledge of analytics libraries, open-source Natural Language Processing, and statistical and big data computing libraries
  • Familiarity with visualization and reporting tools like Tableau and Kibana
  • Should be good at storytelling in technology
Please note that candidates should be interested in teaching and research work.

Qualification: B.Tech / BE / M.Sc / MBA / B.Sc, Having Certifications in Big Data Technologies and Cloud platforms like AWS, Azure and GCP will be preferred
Primary Skills: Big Data + Python + Spark + Hive + Cloud Computing
Secondary Skills: NoSQL+ SQL + ETL + Scala + Tableau
Selection Process: 1 Hackathon, 1 Technical round and 1 HR round
Benefit: Free of cost training on Data Science from top notch professors
SpringML
Posted by Sai Raj Sampath
Remote, Hyderabad
4 - 9 yrs
₹12L - ₹20L / yr
Big Data
Data engineering
TensorFlow
Apache Spark
skill iconJava
+2 more
REQUIRED SKILLS:

• Total of 4+ years of experience in development, architecting/designing and implementing Software solutions for enterprises.

• Must have strong programming experience in either Python or Java/J2EE.

• Minimum of 4+ years’ experience working with various cloud platforms, preferably Google Cloud Platform.

• Experience in Architecting and Designing solutions leveraging Google Cloud products such as Cloud BigQuery, Cloud DataFlow, Cloud Pub/Sub, Cloud BigTable and Tensorflow will be highly preferred.

• Presentation skills with a high degree of comfort speaking with management and developers

• The ability to work in a fast-paced work environment

• Excellent communication, listening, and influencing skills

RESPONSIBILITIES:

• Lead teams to implement and deliver software solutions for Enterprises by understanding their requirements.

• Communicate efficiently and document the Architectural/Design decisions to customer stakeholders/subject matter experts.

• Learn new products quickly and rapidly comprehend new technical areas, both technical and functional, applying detailed and critical thinking to customer solutions.

• Implementing and optimizing cloud solutions for customers.

• Migration of Workloads from on-prem/other public clouds to Google Cloud Platform.

• Provide solutions to team members for complex scenarios.

• Promote good design and programming practices with various teams and subject matter experts.

• Ability to work on any product on the Google cloud platform.

• Must be hands-on and be able to write code as required.

• Ability to lead junior engineers and conduct code reviews



QUALIFICATION:

• Minimum B.Tech/B.E Engineering graduate
LendingKart
Posted by Mohammed Nayeem
Bengaluru (Bangalore), Ahmedabad
2 - 5 yrs
₹2L - ₹13L / yr
Python
Data Science
SQL
Roles and Responsibilities:
  • Mining large volumes of credit behavior data to generate insights around product holdings and monetization opportunities for cross-sell
  • Use data science to size the opportunity and product potential for the launch of any new products/pilots
  • Build propensity models using heuristics and campaign performance to maximize efficiency
  • Conduct portfolio analysis and establish key metrics for cross-sell partnerships

Desired profile/Skills:
  • 2-5 years of experience with a degree in any quantitative discipline such as Engineering, Computer Science, Economics, Statistics or Mathematics
  • Excellent problem solving and comprehensive analytical skills: the ability to structure ambiguous problem statements, perform detailed analysis and derive crisp insights
  • Solid experience in using Python and SQL
  • Prior work experience in the financial services space would be highly valued
Location: Bangalore/ Ahmedabad