
Data Engineer

at Futurense Technologies

Posted by Rajendra Dasigari
Bengaluru (Bangalore)
2 - 7 yrs
₹6L - ₹12L / yr
Full time
Skills
ETL
Data Warehouse (DWH)
Apache Hive
Informatica
Data engineering
Python
SQL
Amazon Web Services (AWS)
Snowflake schema
SSIS
1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet business requirements
3. Identify, design, and implement internal process improvements
4. Optimize data delivery and re-design infrastructure for greater scalability
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
6. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
7. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
8. Create data tools for analytics and data scientist team members
 
Skills Required:
 
1. Working knowledge of ETL on any cloud (Azure / AWS / GCP)
2. Proficient in Python (Programming / Scripting)
3. Good understanding of any of the data warehousing concepts (Snowflake / AWS Redshift / Azure Synapse Analytics / Google Big Query / Hive)
4. In-depth understanding of principles of database structure
5. Good understanding of any of the ETL technologies (Informatica PowerCenter / AWS Glue / Data Factory / SSIS / Spark / Matillion / Talend / Azure)
6. Proficient in SQL (query solving)
7. Knowledge of change management / version control tools (VSS / Azure DevOps / TFS / GitHub / Bitbucket) and CI/CD (Jenkins)
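As a rough illustration of the ETL-plus-SQL skill set listed above, here is a minimal extract-transform-load sketch in Python. It uses the standard-library sqlite3 module as a stand-in for a real warehouse; the table names (`raw_orders`, `customer_revenue`) and the `run_etl` helper are invented for the example, not part of any stack named in the posting.

```python
import sqlite3

def run_etl(conn):
    """Tiny illustrative ETL: extract raw orders, transform, load a summary table."""
    cur = conn.cursor()
    # Extract: read raw rows from the source table.
    rows = cur.execute("SELECT customer, amount FROM raw_orders").fetchall()
    # Transform: aggregate revenue per customer.
    totals = {}
    for customer, amount in rows:
        totals[customer] = totals.get(customer, 0) + amount
    # Load: write the aggregate into a reporting table.
    cur.execute("CREATE TABLE IF NOT EXISTS customer_revenue "
                "(customer TEXT PRIMARY KEY, revenue REAL)")
    cur.executemany("INSERT OR REPLACE INTO customer_revenue VALUES (?, ?)",
                    totals.items())
    conn.commit()
    return totals

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                     [("acme", 100.0), ("acme", 50.0), ("globex", 75.0)])
    print(run_etl(conn))  # {'acme': 150.0, 'globex': 75.0}
```

In a production pipeline the same three stages would target a cloud warehouse (Redshift, Synapse, BigQuery, Snowflake), and the transform would typically be pushed down into SQL or an ETL tool rather than done in Python.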

About Futurense Technologies

Founded: 2020
Size: 20-100
Stage: Bootstrapped
About: Building futuristic tech solutions and nurturing future-ready careers, Futurense is one of the best global IT consulting and software development firms.
Connect with the team: Rajendra Dasigari
Why apply to jobs via Cutshort
Personalized job matches
Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.
Verified hiring teams
See actual hiring teams, find common social connections or connect with them directly. No 3rd party agencies here.
Move faster with AI
We use AI to get you faster responses, recommendations and unmatched user experience.
2,101,133 matches delivered
3,712,187 network size
15,000 companies hiring

Similar jobs

at DeepIntent
Posted by Indrajeet Deshmukh
Pune
1 - 3 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more

Senior Software Engineer - Data 

 

Job Description:

We are looking for a tech-savvy Data Engineer to join our growing data team. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. The hire must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.

 

Data Engineer Job Responsibilities:

  • Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.
  • Implement processes and systems to monitor data accuracy, ensuring 100% data availability for key stakeholders and business processes that depend on it.
  • Write unit/integration tests and document work.
  • Perform data analysis required to troubleshoot data related issues and assist in the resolution of data issues.
  • Design data integrations and reporting framework.
  • Work with stakeholders including the Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Design and evaluate open source and vendor tools for data lineage.
  • Work closely with all business units and engineering teams to develop strategy for long term data platform architecture.

 

Data Engineer Qualifications / Skills:

  • 3+ years of Java development experience
  • Experience with or knowledge of Agile Software Development methodologies
  • Excellent problem solving and troubleshooting skills
  • Process oriented with great documentation skills
  • Experience with big data technologies like Kafka, BigQuery, etc
  • Experience with AWS cloud services: EC2, RDS, etc
  • Experience with message queuing, stream-processing systems


Education, Experience and Licensing Requirements:

  • Degree in Computer Science, IT, or similar field; a Master’s is a plus
  • 3+ years of hands on development experience
  • 3+ years of SQL experience (No-SQL experience is a plus)
  • 3+ years of experience with schema design and dimensional data modeling
  • Experience designing, building and maintaining data processing systems
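Since the requirements above call for schema design and dimensional data modeling, here is a hedged, toy-scale sketch of a star schema (one fact table plus one dimension) using the standard-library sqlite3 module; every table and column name is invented for illustration and is not drawn from the posting.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimension table: one row per product, keyed by a surrogate key.
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    name        TEXT,
    category    TEXT
);
-- Fact table: one row per sale, referencing the dimension by surrogate key.
CREATE TABLE fact_sales (
    sale_id     INTEGER PRIMARY KEY,
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity    INTEGER,
    amount      REAL
);
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                 [(1, "widget", "hardware"), (2, "ebook", "media")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(10, 1, 2, 40.0), (11, 1, 1, 20.0), (12, 2, 3, 15.0)])

# A typical dimensional query: revenue rolled up by a dimension attribute.
for category, revenue in conn.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.category ORDER BY p.category
"""):
    print(category, revenue)
```

The same fact/dimension split carries over directly to warehouse engines like BigQuery or Redshift; only the DDL dialect changes.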


at Deltacubes
Posted by Bavithra Kanniyappan
Remote only
5 - 12 yrs
₹10L - ₹15L / yr
Python
Amazon Web Services (AWS)
PySpark
Scala
Spark
+3 more

Hiring - Python Developer Freelance Consultant (WFH-Remote)

Greetings from Deltacubes Technology!!

 

Skillset Required:

Python

PySpark

AWS

Scala

 

Experience:

5+ years

 

Thanks

Bavithra

 

Delhi, Gurugram, Noida, Ghaziabad, Faridabad
10 - 15 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
Amazon Web Services (AWS)
Migration

Greetings!

Looking urgently!

Experience: minimum 10 years
Location: Delhi
Salary: negotiable



Role

AWS Data Migration Consultant

Provide data migration strategy, expert review and guidance on migrating data from on-premises infrastructure to AWS, including AWS Fargate, PostgreSQL and DynamoDB. This includes review and SME inputs on:

·       Data migration plan, architecture, policies, procedures

·       Migration testing methodologies

·       Data integrity, consistency, resiliency.

·       Performance and Scalability

·       Capacity planning

·       Security, access control, encryption

·       DB replication and clustering techniques

·       Migration risk mitigation approaches

·       Verification and integrity testing, reporting (Record and field level verifications)

·       Schema consistency and mapping

·       Logging, error recovery

·       Dev-test, staging and production artifact promotions and deployment pipelines

·       Change management

·       Backup, DR approaches and best practices.


Qualifications

  • Worked on mid-to-large-scale data migration projects, specifically from on-prem to AWS, preferably in the BFSI domain
  • Deep expertise in AWS Redshift, PostgreSQL and DynamoDB from a data management, performance, scalability and consistency standpoint
  • Strong knowledge of AWS Cloud architecture and components, solutions and well-architected frameworks
  • Expertise in SQL and DB performance-related aspects
  • Solution architecture work for enterprise-grade BFSI applications
  • Successful track record of defining and implementing data migration strategies
  • Excellent communication and problem-solving skills
  • 10+ years' experience in technology, with at least 4 years in AWS and DBA/DB management/migration-related work
  • Bachelor's degree or higher in Engineering or a related field


AI-powered cloud-based SaaS solution provider
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
8 - 15 yrs
₹25L - ₹60L / yr
Data engineering
Big Data
Spark
Apache Kafka
Cassandra
+20 more
Responsibilities

● Able to contribute to the gathering of functional requirements, developing technical specifications, and test case planning
● Demonstrating technical expertise, and solving challenging programming and design problems
● 60% hands-on coding with architecture ownership of one or more products
● Ability to articulate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Mentor and guide team members
● Work cross-functionally with various Bidgely teams including product management, QA/QE, various product lines, and/or business units to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 8-12 years' experience designing and developing applications in Data Engineering
● Hands-on experience with Big Data ecosystems
● Past experience with Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, Zookeeper
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python
● Ability to lead and mentor technical team members
● Expertise with the entire Software Development Life Cycle (SDLC)
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Business acumen: strategic thinking & strategy development
● Experience on Cloud or AWS is preferable
● Good understanding of and ability to develop software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements
● Experience with Agile Development, SCRUM, or Extreme Programming methodologies
at Klubworks
Posted by Anupam Arya
Bengaluru (Bangalore)
1 - 3 yrs
₹10L - ₹16L / yr
Python
Amazon Web Services (AWS)
Amazon Redshift
PySpark
MySQL
+2 more
About the role
 
You will be building data pipelines that transform raw, unstructured data into formats data scientists can use for analysis. You will be responsible for creating and maintaining the analytics infrastructure that enables almost every other data function. This includes architectures such as databases, servers, and large-scale processing systems.
 
Responsibilities:
  • Responsible for setting up a scalable data warehouse
  • Building data pipeline mechanisms to integrate the data from various sources for all of Klub's data
  • Set up data-as-a-service to expose the needed data as part of APIs
  • Have a good understanding of how finance data works
  • Standardize and optimize design thinking across the technology team
  • Collaborate with stakeholders across engineering teams to come up with short- and long-term architecture decisions
  • Build robust data models that will help support various reporting requirements for the business, ops and the leadership team
  • Participate in peer reviews, provide code/design comments
  • Own the problem and deliver to success

Requirements:
  • Overall 1+ years of industry experience
  • Prior experience with backend and data engineering systems
  • At least 1+ years of working experience in distributed systems
  • Deep understanding of the Python tech stack, with libraries like Flask, SciPy, NumPy and pytest
  • Good understanding of Apache Airflow or similar orchestration tools
  • Good knowledge of data warehouse technologies like Apache Hive or similar, and of Apache PySpark or similar
  • Good knowledge of how to build analytics services on the data for different reporting and BI needs
  • Good knowledge of data pipeline/ETL tools like Hevo Data or similar, and of query engine technologies like Trino/GraphQL or similar
  • Deep understanding of dimensional data model concepts; familiarity with RDBMS (MySQL/PostgreSQL), NoSQL (MongoDB/DynamoDB) databases & caching (Redis or similar)
  • Should be proficient in writing SQL queries
  • Good knowledge of Kafka; able to write clean, maintainable code
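The requirements above mention Apache Airflow-style orchestration. The core idea behind such tools, running tasks in dependency order over a DAG, can be sketched with only the standard library, as below; the task names and the `run_pipeline` helper are invented for illustration, and a real deployment would use Airflow's own operators and scheduler.

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (an Airflow-style DAG,
# expressed with only the standard library; task names are invented).
dag = {
    "extract":   set(),
    "transform": {"extract"},
    "load":      {"transform"},
    "report":    {"load"},
}

def run_pipeline(dag):
    """Return the order in which an orchestrator would run the tasks."""
    order = []
    for task in TopologicalSorter(dag).static_order():
        order.append(task)  # a real orchestrator would invoke the task here
    return order

print(run_pipeline(dag))  # ['extract', 'transform', 'load', 'report']
```

`TopologicalSorter` raises `CycleError` on a cyclic graph, which is exactly the validation an orchestrator performs before scheduling a DAG.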
 
Nice to have
  • Built a data warehouse from scratch and set up a scalable data infrastructure
  • Prior experience in fintech would be a plus
  • Prior experience in data modelling
Cloud transformation products, frameworks and services
Agency job
via The Hub by Sridevi Viswanathan
Remote only
3 - 8 yrs
₹20L - ₹26L / yr
Airflow
Amazon Redshift
Amazon Web Services (AWS)
Java
ETL
+4 more
  • Experience with cloud-native data tools/services such as AWS Athena, AWS Glue, Redshift Spectrum, AWS EMR, AWS Aurora, BigQuery, Bigtable, S3, etc.
  • Strong programming skills in at least one of the following languages: Java, Scala, C++.
  • Familiarity with a scripting language like Python as well as Unix/Linux shells.
  • Comfortable with multiple AWS components including RDS, AWS Lambda, AWS Glue, AWS Athena, EMR. Equivalent tools in the GCP stack will also suffice.
  • Strong analytical skills and advanced SQL knowledge, including indexing and query optimization techniques.
  • Experience implementing software around data processing, metadata management, and ETL pipeline tools like Airflow.

Experience with the following software/tools is highly desired:

  • Apache Spark, Kafka, Hive, etc.
  • SQL and NoSQL databases like MySQL, Postgres, DynamoDB.
  • Workflow management tools like Airflow.
  • AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR.
  • Familiarity with Spark programming paradigms (batch and stream-processing).
  • RESTful API services.
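One way to picture the "indexing and query optimization" skill listed above: a small, hypothetical SQLite session comparing the plan for the same query before and after adding an index. Table and index names are invented, and warehouse engines expose analogous (but differently formatted) EXPLAIN output.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts INTEGER, payload TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 [(i % 100, i, "x") for i in range(1000)])

def plan(conn, sql):
    """Return SQLite's query-plan description (detail column) for a statement."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT COUNT(*) FROM events WHERE user_id = 42"
before = plan(conn, query)   # without an index: a full table scan
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
after = plan(conn, query)    # with the index: an index search
print(before)
print(after)
```

The exact plan text varies by SQLite version, but the before/after contrast (scan vs. index search) is the general pattern an engineer checks when tuning queries.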
at ORBO
Posted by Neha T
Mumbai, Delhi
4 - 7 yrs
₹8L - ₹22L / yr
OpenCV
Image Processing
Image segmentation
Deep Learning
Python
+7 more

Who Are We

 

A research-oriented company with expertise in computer vision and artificial intelligence, Orbo is, at its core, a comprehensive platform offering an AI-based visual enhancement stack. Companies can find a product suited to their needs, where deep-learning-powered technology automatically improves their imagery.

 

ORBO's solutions help industries such as BFSI, beauty and personal care, and e-commerce with digital transformation and image retouching in multiple ways.

 

WHY US

  • Join a top AI company
  • Grow with your best companions
  • Continuous pursuit of excellence, equality and respect
  • Competitive compensation and benefits

You'll be a part of the core team and will be working directly with the founders in building and iterating upon the core products that make cameras intelligent and images more informative.

 

To learn more about how we work, please check out

https://www.orbo.ai/.

 

Description:

We are looking for a computer vision engineer to lead our team in developing a factory floor analytics SaaS product. This is a fast-paced role, and the person will get the opportunity to develop an industrial-grade solution from concept to deployment.

 

Responsibilities:

  • Research and develop computer vision solutions for industries (BFSI, beauty and personal care, e-commerce, defence, etc.)
  • Lead a team of ML engineers in developing an industrial AI product from scratch
  • Set up an end-to-end deep learning pipeline for data ingestion, preparation, model training, validation and deployment
  • Tune the models to achieve high accuracy and minimum latency
  • Deploy developed computer vision models on edge devices, after optimization, to meet customer requirements

 

 

Requirements:

  • Bachelor's degree
  • Understanding of the depth and breadth of computer vision and deep learning algorithms
  • 4+ years of industrial experience in computer vision and/or deep learning
  • Experience in taking an AI product from scratch to commercial deployment
  • Experience in image enhancement, object detection, image segmentation and image classification algorithms
  • Experience in deployment with OpenVINO, ONNX Runtime and TensorRT
  • Experience in deploying computer vision solutions on edge devices such as Intel Movidius and Nvidia Jetson
  • Experience with machine/deep learning frameworks like TensorFlow and PyTorch
  • Proficient understanding of code versioning tools, such as Git

Our perfect candidate is someone that:

  • is proactive and an independent problem solver
  • is a constant learner. We are a fast-growing start-up. We want you to grow with us!
  • is a team player and good communicator

 

What We Offer:

  • You will have fun working with a fast-paced team on a product that can impact the business model of E-commerce and BFSI industries. As the team is small, you will easily be able to see a direct impact of what you build on our customers (Trust us - it is extremely fulfilling!)
  • You will be in charge of what you build and be an integral part of the product development process
  • Technical and financial growth!
A Product Company
Agency job
via wrackle by Lokesh M
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹26L / yr
Looker
Big Data
Hadoop
Spark
Apache Hive
+4 more
Job Title: Senior Data Engineer/Analyst
Location: Bengaluru
Department: Engineering

Bidgely is looking for an extraordinary and dynamic Senior Data Analyst to be part of its core team in Bangalore. You must have delivered exceptionally high-quality, robust products dealing with large data. Be part of a highly energetic and innovative team that believes nothing is impossible with some creativity and hard work.

Responsibilities
● Design and implement a high-volume data analytics pipeline in Looker for Bidgely's flagship product.
● Implement data pipelines in the Bidgely Data Lake.
● Collaborate with product management and engineering teams to elicit and understand their requirements and challenges, and develop potential solutions.
● Stay current with the latest tools, technology ideas and methodologies; share knowledge by clearly articulating results and ideas to key decision makers.

Requirements
● 3-5 years of strong experience in data analytics and in developing data pipelines.
● Very good expertise in Looker.
● Strong in data modeling, developing SQL queries and optimizing queries.
● Good knowledge of data warehouses (Amazon Redshift, BigQuery, Snowflake, Hive).
● Good understanding of Big Data applications (Hadoop, Spark, Hive, Airflow, S3, Cloudera).
● Attention to detail. Strong communication and collaboration skills.
● BS/MS in Computer Science or equivalent from premier institutes.
at Fragma Data Systems
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore), Hyderabad
3 - 9 yrs
₹8L - ₹20L / yr
PySpark
Data engineering
Data Engineer
Windows Azure
ADF
+2 more
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience in SQL DBs; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business rules processing and data extraction from a data lake into data streams for business consumption
• Good customer communication
• Good analytical skills
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services, in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub
  • Experience in migrating on-premise data warehouses to data platforms on the Azure cloud
  • Designing and implementing data engineering, ingestion and transformation functions
  • Azure Synapse or Azure SQL Data Warehouse
  • Spark on Azure, as available in HDInsight and Databricks

Good to Have:
  • Experience with Azure Analysis Services
  • Experience in Power BI
  • Experience with third-party solutions like Attunity/StreamSets, Informatica
  • Experience with pre-sales activities (responding to RFPs, executing quick POCs)
  • Capacity planning and performance tuning on the Azure stack and Spark
at Public Vibe
Posted by Dhaneesha Dominic
Hyderabad
1 - 3 yrs
₹1L - ₹3L / yr
Java
Data Science
Python
Natural Language Processing (NLP)
Scala
+3 more
Hi candidates,

Greetings from Publicvibe! We are hiring NLP Engineers / Data Scientists with 0.6 to 2.5 years of experience for our Hyderabad location. If you are looking out for opportunities or a job change, reach out to us.

Regards,
Dhaneesha Dominic
Did not find a job you were looking for?
Search for relevant jobs from 10000+ companies such as Google, Amazon & Uber actively hiring on Cutshort.