Data Engineer (Senior/Lead)
Posted by Veeralakshmi K
4 - 10 yrs
₹10L - ₹15L / yr
Remote, Chennai, Coimbatore, Madurai
Skills
Python
SQL
Amazon Redshift
Amazon RDS
AWS Simple Notification Service (SNS)
Amazon SQS
ECS
ETL
AWS Simple Queue Service (SQS)
Docker

Role Summary


As a Data Engineer, you will be an integral part of our Data Engineering team, supporting an event-driven, serverless data engineering pipeline on the AWS cloud. You will be responsible for assisting in the end-to-end analysis, development and maintenance of data pipelines and systems (DataOps), and will work closely with fellow data engineers and production support to ensure the availability and reliability of data for analytics and business intelligence purposes.
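For context, here is a minimal, hypothetical sketch (not our production code) of the kind of event-driven, serverless step such a pipeline involves: an AWS Lambda handler in Python that consumes SQS messages and republishes enriched records to SNS via boto3. The topic ARN, environment variable and field names are illustrative assumptions.

import json
import os

import boto3

sns = boto3.client("sns")
# Hypothetical topic; in a real pipeline this would come from infrastructure config (CFT/Terraform).
TOPIC_ARN = os.environ.get("ENRICHED_TOPIC_ARN", "")


def handler(event, context):
    """Lambda entry point triggered by an SQS event source mapping."""
    for record in event.get("Records", []):
        payload = json.loads(record["body"])   # raw event delivered via SQS
        payload["processed"] = True            # placeholder for a real transformation
        sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(payload))
    return {"batchItemFailures": []}           # report no partial batch failures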


Requirements:


·      Around 4 years of working experience in data warehousing / BI systems.

·      Strong hands-on experience with Snowflake AND strong programming skills in Python

·      Strong hands-on SQL skills

·      Knowledge of any of the cloud databases, such as Snowflake, Redshift, Google BigQuery, RDS, etc.

·      Knowledge of dbt (data build tool) for cloud databases

·      AWS services such as SNS, SQS, ECS, Kinesis and Lambda functions, along with Docker

·      Solid understanding of ETL processes and data warehousing concepts

·      Familiarity with version control systems (e.g., Git/Bitbucket) and collaborative development practices in an agile framework

·      Experience with scrum methodologies

·      Infrastructure build tools such as CloudFormation (CFT) or Terraform are a plus.

·      Knowledge of Denodo, data cataloguing tools and data quality mechanisms is a plus.

·      Strong team player with good communication skills.


Overview: OptiSol Business Solutions


OptiSol was named to this year's Best Companies to Work For list by Great Place to Work. We are a team of 500+ Agile employees, with a development center in India and global offices in the US, UK, Australia, Ireland, Sweden, and Dubai. Over a joyful journey of 16+ years, we have built 500+ digital solutions for 200+ happy and satisfied clients across 24 countries.


Benefits of working with OptiSol


·      Great Learning & Development program

·      Flextime, Work-at-Home & Hybrid Options

·      A knowledgeable, high-achieving, experienced & fun team.

·      Spot Awards & Recognition.

·      The chance to be a part of the next success story.

·      A competitive base salary.


More than just a job, we offer an opportunity to grow. Are you someone looking to build your future and your dream? We have the job for you, to make that dream come true.


About Optisol Business Solutions Pvt Ltd

OptiSol Business Solutions is one of the leading e-business and web development companies in Chennai, India, specializing in PHP, Java, mobile app development, and Ruby on Rails.
Connect with the team: sam kirubakar, Krishnaveni Hemanthkumar, Jayakumar Radhakrishnan, Manikandan Obs

Similar jobs

TEKsystems
Posted by priyanka kanwar
Gurugram
5 - 10 yrs
₹15L - ₹25L / yr
Apache Spark
Amazon Web Services (AWS)
Python
Airflow
Algorithms

TOP 3 SKILLS

Python (Language)

Spark Framework

Spark Streaming

Docker / Jenkins / Spinnaker

AWS

Hive Queries

The candidate should be a good coder.

Preferred: Airflow

Must-have experience:

Python

Spark framework and streaming

Exposure to the machine learning lifecycle is mandatory.

Project:

This is a search-domain project: for any search activity on the website, the team builds the corresponding sorting/scoring models, which are created by the data scientists. The team works mostly on the streaming side of the data, so the candidate would work extensively with Spark Streaming, and there will be a lot of machine learning work.
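As a rough illustration only (not the team's actual code), a Spark Structured Streaming job of the kind described above might read search events from a stream and apply a scoring step; the topic name, schema and scoring logic below are assumptions.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("search-scoring").getOrCreate()

schema = StructType([
    StructField("query", StringType()),
    StructField("result_id", StringType()),
])

# Read raw search events from a (hypothetical) Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "search-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Placeholder scoring; in practice this would call the data scientists' model.
scored = events.withColumn("score", F.length("query") * F.lit(0.1))

scored.writeStream.outputMode("append").format("console").start().awaitTermination()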


INTERVIEW INFORMATION

3-4 rounds.

1st round based on data engineering batching experience.

2nd round based on data engineering streaming experience.

3rd round based on the ML lifecycle (the 3rd round can be a techno-functional round based on previous feedback; otherwise, a 4th, functional round will be held if required).

NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹10L - ₹25L / yr
Data Science
R Programming
Python
Machine Learning (ML)
Entity Framework

Job Responsibilities

  • Design machine learning systems
  • Research and implement appropriate ML algorithms and tools
  • Develop machine learning applications according to requirements
  • Select appropriate datasets and data representation methods
  • Run machine learning tests and experiments
  • Perform statistical analysis and fine-tuning using test results
  • Train and retrain systems when necessary

 

Requirements for the Job

 

  • Bachelor's/Master's/PhD in Computer Science, Mathematics, Statistics or an equivalent field from a tier-one college, with a minimum of 2 years of overall experience
  • Minimum 1 year of experience working as a Data Scientist deploying ML at scale in production
  • Experience with machine learning techniques (e.g. NLP, computer vision, BERT, LSTM, etc.) and frameworks (e.g. TensorFlow, PyTorch, Scikit-learn, etc.)
  • Working knowledge of deploying Python systems (using Flask or TensorFlow Serving); a minimal, illustrative serving sketch follows this list
  • Previous experience in the following areas will be preferred: Natural Language Processing (NLP) using LSTM and BERT; chatbots or dialogue systems; machine translation; text comprehension; text summarization
  • Computer vision: deep neural networks/CNNs for object detection and image classification, transfer learning pipelines, and object detection/instance segmentation (Mask R-CNN, YOLO, SSD)
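As referenced in the list above, here is a minimal, illustrative sketch of serving a Python model behind Flask; the model file, endpoint and input fields are hypothetical, not a prescribed design.

import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # assumed pre-trained scikit-learn pipeline


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    texts = payload.get("texts", [])
    preds = model.predict(texts).tolist()  # the pipeline is assumed to handle vectorization
    return jsonify({"predictions": preds})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)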
SmartHub Innovation Pvt Ltd
Posted by Sathya Venkatesh
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹20L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

JD Code: SHI-LDE-01 

Version#: 1.0 

Date of JD Creation: 27-March-2023 

Position Title: Lead Data Engineer 

Reporting to: Technical Director 

Location: Bangalore Urban, India (on-site) 

 

SmartHub.ai (www.smarthub.ai) is a fast-growing Startup headquartered in Palo Alto, CA, and with offices in Seattle and Bangalore. We operate at the intersection of AI, IoT & Edge Computing. With strategic investments from leaders in infrastructure & data management, SmartHub.ai is redefining the Edge IoT space. Our “Software Defined Edge” products help enterprises rapidly accelerate their Edge Infrastructure Management & Intelligence. We empower enterprises to leverage their Edge environment to increase revenue, efficiency of operations, manage safety and digital risks by using Edge and AI technologies. 

 

SmartHub is an equal opportunity employer and will always be committed to nurturing a workplace culture that supports, inspires and respects all individuals, and encourages employees to bring their best selves to work, laugh and share. We seek builders from a variety of backgrounds, perspectives and skill sets to join our team. 

Summary 

This role requires the candidate to translate business and product requirements into building, maintaining and optimizing data systems, which can be relational or non-relational in nature. The candidate is expected to tune and analyse the data for short- and long-term trend analysis, reporting, and AI/ML use cases. 

We are looking for a talented technical professional with at least 8 years of proven experience in owning, architecting, designing, operating and optimising databases that are used for large scale analytics and reports. 

Responsibilities 

  • Provide technical & architectural leadership for the next generation of product development. 
  • Innovate, Research & Evaluate new technologies and tools for a quality output. 
  • Architect, Design and Implement ensuring scalability, performance and security. 
  • Code and implement new algorithms to solve complex problems. 
  • Analyze complex data, develop, optimize and transform large data sets both structured and unstructured. 
  • Ability to deploy and administer the database and continuously tune it for performance, especially on container orchestration stacks such as Kubernetes 
  • Develop analytical models and solutions. Mentor junior members technically in architecture, design and robust coding. 
  • Work in an Agile development environment while continuously evaluating and improving engineering processes 

Required 

  • At least 8 years of experience with significant depth in designing and building scalable distributed database systems for enterprise class products, experience of working in product development companies. 
  • Should have been feature/component lead for several complex features involving large datasets. 
  • Strong background in relational and non-relational databases such as Postgres, MongoDB and Hadoop. 
  • Deep expertise in database optimization and tuning, and in SQL. Time-series databases, Apache Drill, HDFS and Spark are good to have. 
  • Excellent analytical and problem-solving skill sets. 
  • Experience in  for high throughput is highly desirable 
  • Exposure to database provisioning in Kubernetes/non-Kubernetes environments, configuration and tuning in a highly available mode. 
  • Demonstrated ability to provide technical leadership and mentoring to the team 


InfoCepts
Posted by Lalsaheb Bepari
Chennai, Pune, Nagpur
7 - 10 yrs
₹5L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

Responsibilities:

 

• Designing Hive/HCatalog data models, including table definitions, file formats and compression techniques for structured & semi-structured data processing

• Implementing Spark-based ETL frameworks

• Implementing Big data pipeline for Data Ingestion, Storage, Processing & Consumption

• Modifying the Informatica-Teradata & Unix based data pipeline

• Enhancing the Talend-Hive/Spark & Unix based data pipelines

• Develop and deploy Scala/Python-based Spark jobs for ETL processing (a brief, illustrative sketch follows this list)

• Strong SQL & DWH concepts.
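As noted above, a brief, hypothetical sketch of a Python/Spark ETL job; paths, formats and column names are assumptions, not project specifics.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-etl").getOrCreate()

# Ingest raw CSV data (hypothetical location and schema).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Basic cleansing and typing.
cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount").cast("double") > 0)
)

# Write partitioned Parquet for downstream Hive/warehouse consumption.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("s3://example-bucket/curated/orders/")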

 

Preferred Background:

 

• Function as integrator between business needs and technology solutions, helping to create technology solutions to meet clients’ business needs

• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives

• Understanding the business's EDW system and creating high-level design and low-level implementation documents

• Understanding the business's Big Data Lake system and creating high-level design and low-level implementation documents

• Designing Big data pipeline for Data Ingestion, Storage, Processing & Consumption

Phenom People
Posted by Srivatsav Chilukoori
Hyderabad
3 - 6 yrs
₹10L - ₹18L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Python
Deep Learning

JOB TITLE - Product Development Engineer - Machine Learning
● Work Location: Hyderabad
● Full-time
 

Company Description

Phenom People is the leader in Talent Experience Marketing (TXM for short). We’re an early-stage startup on a mission to fundamentally transform how companies acquire talent. As a category creator, our goals are two-fold: to educate talent acquisition and HR leaders on the benefits of TXM and to help solve their recruiting pain points.
 

Job Responsibilities:

  • Design and implement machine learning, information extraction, and probabilistic matching algorithms and models (a toy matching sketch follows this list)
  • Research and develop innovative, scalable and dynamic solutions to hard problems
  • Work closely with Machine Learning Scientists (PhDs), ML engineers, data scientists and data engineers to address challenges head-on.
  • Use the latest advances in NLP, data science and machine learning to enhance our products and create new experiences
  • Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
  • Be a valued contributor in shaping the future of our products and services
  • You will be part of our Data Science & Algorithms team and collaborate with product management and other team members
  • Be part of a fast-paced, fun-focused, agile team
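As a toy illustration of the probabilistic matching mentioned in the first item above (not Phenom's actual method), TF-IDF character n-grams plus cosine similarity can match free-text job titles to a small taxonomy; the titles below are made up.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

taxonomy = ["data engineer", "machine learning engineer", "recruiter"]
queries = ["sr. ml engineer", "talent acquisition specialist"]

# Character n-grams make the matching robust to abbreviations and typos.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
tax_matrix = vec.fit_transform(taxonomy)
scores = cosine_similarity(vec.transform(queries), tax_matrix)

for query, row in zip(queries, scores):
    best = row.argmax()
    print(f"{query!r} -> {taxonomy[best]!r} (score={row[best]:.2f})")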

Job Requirement:

  • 4+ years of industry experience
  • Ph.D./MS/B.Tech in computer science, information systems, or similar technical field
  • Strong mathematics, statistics, and data analytics
  • Solid coding and engineering skills preferably in Machine Learning (not mandatory)
  • Proficient in Java, Python, and Scala
  • Industry experience building and productionizing end-to-end systems
  • Knowledge of Information Extraction, NLP algorithms coupled with Deep Learning
  • Experience with data processing and storage frameworks like Hadoop, Spark, Kafka etc.


Position Summary

We're looking for a Machine Learning Engineer to join our team at Phenom. We expect candidates to fulfil the points below.

  • Building accurate machine learning models is the main goal of a machine learning engineer
  • Linear Algebra, Applied Statistics and Probability
  • Building Data Models
  • Strong knowledge of NLP
  • Good understanding of multithreaded and object-oriented software development
  • Mathematics, Mathematics and Mathematics
  • Collaborate with Data Engineers to prepare data models required for machine learning models
  • Collaborate with other product team members to apply state-of-the-art AI methods, including dialogue systems, natural language processing, information retrieval and recommendation systems
  • Build large-scale software systems and numerical computation topics
  • Use predictive analytics and data mining to solve complex problems and drive business decisions
  • Should be able to design an accurate end-to-end ML architecture, including data flows, algorithm scalability and applicability
  • Tackle situations where both the problem and the solution are unknown
  • Solve analytical problems, and effectively communicate methodologies and results to the customers
  • Adept at translating business needs into technical requirements and translating data into actionable insights
  • Work closely with internal stakeholders such as business teams, product managers, engineering teams, and customer success teams.

Benefits

  • Competitive salary for a startup
  • Gain experience rapidly
  • Work directly with the executive team
  • Fast-paced work environment

 

About Phenom People

At PhenomPeople, we believe candidates (job seekers) are consumers. That's why we're bringing an e-commerce experience to the job search, with a view to converting candidates into applicants. The Intelligent Career Site™ platform delivers the most relevant and personalized job search yet: a career site optimized for mobile and desktop, designed to integrate with any ATS, tailored content such as Glassdoor reviews, YouTube videos and LinkedIn connections based on candidates' search habits, and an integrated real-time recruiting analytics dashboard.

 

 Use Company career sites to reach candidates and encourage them to convert. The Intelligent Career Site™ offers a single platform to serve candidates a modern e-commerce experience from anywhere on the globe and on any device.

 We track every visitor that comes to the Company career site. Through fingerprinting technology, candidates are tracked from the first visit and served jobs and content based on their location, click-stream, behavior on site, browser and device to give each visitor the most relevant experience.

 Like consumers, candidates research companies and read reviews before they apply for a job. Through our understanding of the candidate journey, we are able to personalize their experience and deliver relevant content from sources such as corporate career sites, Glassdoor, YouTube and LinkedIn.

 We give you clear visibility into the Company's candidate pipeline. By tracking up to 450 data points, we build profiles for every career site visitor based on their site visit behavior, social footprint and any other relevant data available on the open web.

 Gain a better understanding of Company’s recruiting spending and where candidates convert or drop off from Company’s career site. The real-time analytics dashboard offers companies actionable insights on optimizing source spending and the candidate experience.

 

Kindly explore the company Phenom: https://www.phenom.com/
YouTube - https://www.youtube.com/c/PhenomPeople
LinkedIn - https://www.linkedin.com/company/phenompeople/

Phenom | Talent Experience Management - https://www.phenom.com/

Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹20L / yr
Kubernetes
Docker
Amazon Web Services (AWS)
Azure
  • Deploy the company's application on customers' public clouds and on-premise data centers (an illustrative verification sketch follows this list)
  • Building Kubernetes based workflows for wide variety of use cases
  • Document and Automate the deployment process for internal and external deployments
  • Interacting with customers over calls during deployment and debugging
  • Deployment and Product Support
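As flagged in the first item above, a small, hypothetical example of checking a rollout with the official Kubernetes Python client; the namespace and deployment names are assumptions, and this is not a prescribed tool for the role.

from kubernetes import client, config


def deployment_ready(namespace: str = "customer-app", name: str = "api") -> bool:
    """Return True when the deployment's ready replicas match the desired count."""
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment(name=name, namespace=namespace)
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    return ready == desired


if __name__ == "__main__":
    print("ready" if deployment_ready() else "not ready")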

 

Desired Skills and Experience

 

  • 4-6 years of experience in infrastructure development, or development and operations.
  • Minimum 2+ years of experience with Docker and Kubernetes.
  • Experience working with Docker and Kubernetes, including awareness of Kubernetes internals and networking. Experience with Linux infrastructure tools.
  • Good interpersonal skills and communication with all levels of management.
  • Extensive experience in setting up Kubernetes on AWS, Azure etc.

 

Good to Have

  • Familiarity with Big Data Tools like Hadoop, Spark.
  • Experience with Java Application Debugging.
  • Experience with monitoring tools like Prometheus, Grafana, etc.

 

UAE Client
Remote only
5 - 10 yrs
₹10L - ₹18L / yr
Informatica
Informatica PowerCenter
SQL
PL/SQL

Informatica PowerCenter (9.x, 10.2): Minimum 2+ years of experience

SQL / PL/SQL: Understanding of SQL procedures; able to convert procedures into Informatica mappings.

Good to have: knowledge of Windows Batch Script is an advantage.

High-Growth Fintech Startup
Agency job
via Unnati by Ramya Senthilnathan
Remote, Mumbai
3 - 5 yrs
₹7L - ₹10L / yr
Business Intelligence (BI)
PowerBI
Analytics
Reporting
Data management
Want to join a trailblazing fintech company that is leveraging software and technology to change the face of short-term financing in India?

Our client is an innovative Fintech company that is revolutionizing the business of short term finance. The company is an online lending startup that is driven by an app-enabled technology platform to solve the funding challenges of SMEs by offering quick-turnaround, paperless business loans without collateral. It counts over 2 million small businesses across 18 cities and towns as its customers. Its founders are IIT and ISB alumni with deep experience in the fin-tech industry, from earlier working with organizations like Axis Bank, Aditya Birla Group, Fractal Analytics, and Housing.com. It has raised funds of Rs. 100 Crore from finance industry stalwarts and is growing by leaps and bounds.
 
As a Data Analyst - SQL, you will work on projects in the Analytics function to generate insights for the business, as well as manage reporting to the management on all things related to lending.
 
You will be part of a rapidly growing tech-driven organization and will be responsible for generating insights that will drive business impact and productivity improvements.
 
What you will do:
  • Ensuring ease of data availability, with relevant dimensions, using Business Intelligence tools.
  • Providing strong reporting and analytical information support to the management team.
  • Transforming raw data into essential metrics based on the needs of relevant stakeholders (a small, illustrative example follows this list).
  • Performing data analysis for generating reports on a periodic basis.
  • Converting essential data into easy to reference visuals using Data Visualization tools (PowerBI, Metabase).
  • Providing recommendations to update current MIS to improve reporting efficiency and consistency.
  • Bringing fresh ideas to the table and keeping a keen eye on trends in the analytics and financial services industry.
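As mentioned in the list above, a small, illustrative example of turning raw loan data into reporting metrics with pandas; the file and column names are assumptions, not the client's schema.

import pandas as pd

raw = pd.read_csv("loans.csv", parse_dates=["disbursed_on"])

# Monthly count and total value of disbursed loans.
monthly = (
    raw.assign(month=raw["disbursed_on"].dt.to_period("M"))
    .groupby("month")
    .agg(loans=("loan_id", "count"), disbursed=("amount", "sum"))
    .reset_index()
)

print(monthly.head())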

 

 

What you need to have:
  • B.Tech/B.E. or MBA/PGDM graduate, with work experience of 3+ years.
  • Experience in Reporting, Data Management (SQL, MongoDB), Visualization (PowerBI, Metabase, Data studio)
  • Work experience in financial services (Indian banks'/NBFCs' in-house analytics units or fintech/analytics start-ups) would be a plus.
Skills:
  • Skilled at writing & optimizing large complicated SQL queries & MongoDB scripts.
  • Strong knowledge of Banking/ Financial Services domain
  • Experience with some of the modern relational databases
  • Ability to work on multiple projects of a varied nature; self-driven.
  • Liaise with cross-functional teams to resolve data issues and build strong reports

 

MNC
Bengaluru (Bangalore)
3 - 9 yrs
₹3L - ₹17L / yr
skill iconScala
Spark
Data Warehouse (DWH)
Business Intelligence (BI)
Apache Spark
Dear All,
We are looking for candidates with good BI/DW experience of 3 - 6 years, along with Spark, Scala and SQL expertise, and Azure.
An Azure background is needed.
     * Spark hands-on: Must have
     * Scala hands-on: Must have
     * SQL expertise: Expert
     * Azure background: Must have
     * Python hands-on: Good to have
     * ADF, Databricks: Good to have
     * Should be able to communicate effectively and deliver technology implementation end to end
Looking for candidates who can join within 15 to 30 days or who are available immediately.


Regards
Gayatri P
Fragma Data Systems
SpotDraft
Posted by Madhav Bhagat
Noida, NCR (Delhi | Gurgaon | Noida)
3 - 7 yrs
₹3L - ₹24L / yr
Python
TensorFlow
Caffe
We are building the AI core for a legal workflow solution. You will be expected to build and train models to extract relevant information from contracts and other legal documents.

Required skills/experience:
- Python
- Basics of deep learning
- Experience with one ML framework (such as TensorFlow, Keras or Caffe)

Preferred skills/experience:
- Exposure to ML concepts like LSTM, RNN and ConvNets
- Experience with NLP and the Stanford POS tagger
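A hedged, minimal sketch of the kind of model hinted at above: a small Keras LSTM classifier over tokenized contract clauses. The vocabulary size, task and labels are entirely illustrative assumptions.

import tensorflow as tf

VOCAB_SIZE = 20000  # assumed tokenizer vocabulary

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. "does this clause contain an indemnity?"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()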