Should have a passion for learning and adapting to new technologies, an aptitude for understanding and troubleshooting issues and risks, the ability to make informed decisions, and the ability to lead projects.
Your Qualifications
- 2-5 years' experience with functional programming
- Experience with functional programming in Scala using the Spark framework (see the sketch after this list)
- Strong understanding of object-oriented programming, data structures, and algorithms
- Good experience with at least one cloud platform (Azure, AWS, or GCP)
- Experience with distributed (multi-tiered) systems, relational databases, and NoSQL storage solutions
- Desire to learn new technologies and languages
- Participation in software design, development, and code reviews
- High level of proficiency in Computer Science/Software Engineering fundamentals, and contribution to the technical skills growth of other team members
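As a rough illustration of the functional Scala-with-Spark style referenced above, here is a minimal sketch; the dataset, names, and threshold are hypothetical illustrations, not part of the role:

```scala
import org.apache.spark.sql.SparkSession

object OrderTotals {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("OrderTotals").getOrCreate()
    import spark.implicits._

    // Hypothetical dataset of (customerId, amount) pairs
    val orders = Seq(("c1", 120.0), ("c2", 75.5), ("c1", 30.0)).toDS()

    // Functional style: immutable transformations chained with filter/group/reduce
    val totals = orders
      .filter(_._2 > 50.0)     // keep orders above an illustrative threshold
      .groupByKey(_._1)        // group by customer id
      .mapValues(_._2)         // project each record to its amount
      .reduceGroups(_ + _)     // sum amounts per customer

    totals.show()
    spark.stop()
  }
}
```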
Your Responsibilities
- Design, build and configure applications to meet business process and application requirements
- Proactively identify and communicate potential issues and concerns and recommend/implement alternative solutions as appropriate.
- Troubleshoot and optimize existing solutions
- Provide advice on technical design to ensure solutions are forward-looking and flexible for potential future requirements and business needs.
Similar jobs
Job Description: Data Engineer
Experience: Over 4 years
Responsibilities:
- Design, develop, and maintain scalable data pipelines for efficient data extraction, transformation, and loading (ETL) processes (a sketch follows this list).
- Architect and implement data storage solutions, including data warehouses, data lakes, and data marts, aligned with business needs.
- Implement robust data quality checks and data cleansing techniques to ensure data accuracy and consistency.
- Optimize data pipelines for performance, scalability, and cost-effectiveness.
- Collaborate with data analysts and data scientists to understand data requirements and translate them into technical solutions.
- Develop and maintain data security measures to ensure data privacy and regulatory compliance.
- Automate data processing tasks using scripting languages (Python, Bash) and big data frameworks (Spark, Hadoop).
- Monitor data pipelines and infrastructure for performance and troubleshoot any issues.
- Stay up to date with the latest trends and technologies in data engineering, including cloud platforms (AWS, Azure, GCP).
- Document data pipelines, processes, and data models for maintainability and knowledge sharing.
- Contribute to the overall data governance strategy and best practices.
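To make the ETL responsibility above concrete, here is a hedged, minimal Spark job in Scala; the bucket paths and column names are hypothetical:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date}

object DailyEventsEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("DailyEventsEtl").getOrCreate()

    // Extract: read raw CSV from a hypothetical landing bucket
    val raw = spark.read.option("header", "true").csv("s3://raw-bucket/events/")

    // Transform: derive a typed date column and drop malformed rows
    val cleaned = raw
      .withColumn("event_date", to_date(col("event_ts")))
      .filter(col("event_date").isNotNull)

    // Load: write partitioned Parquet for downstream consumers
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://curated-bucket/events/")

    spark.stop()
  }
}
```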
Qualifications:
- Strong understanding of data architectures, data modelling principles, and ETL processes.
- Proficiency in SQL (e.g., MySQL, PostgreSQL) and experience with big data querying languages (e.g., Hive, Spark SQL).
- Experience with scripting languages (Python, Bash) for data manipulation and automation.
- Experience with distributed data processing frameworks (Spark, Hadoop) (preferred).
- Familiarity with cloud platforms (AWS, Azure, GCP) for data storage and processing (a plus).
- Experience with data quality tools and techniques (a minimal check is sketched after this list).
- Excellent problem-solving, analytical, and critical thinking skills.
- Strong communication, collaboration, and teamwork abilities.
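One simple shape such a data quality check can take, sketched with Spark in Scala; the column names and the fail-fast policy are illustrative assumptions:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

object QualityChecks {
  // Fail fast if required columns contain nulls or the key column has duplicates
  def validate(df: DataFrame, requiredCols: Seq[String], keyCol: String): Unit = {
    requiredCols.foreach { c =>
      val nulls = df.filter(col(c).isNull).count()
      require(nulls == 0L, s"Column $c has $nulls null values")
    }
    val dupes = df.groupBy(col(keyCol)).count().filter(col("count") > 1).count()
    require(dupes == 0L, s"Key column $keyCol has $dupes duplicated values")
  }
}
```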
About Us:
6sense is a Predictive Intelligence Engine that is reimagining how B2B companies do sales and marketing. It applies big data at scale, advanced machine learning, and predictive modelling to find buyers and predict what they will purchase, when, and how much.
6sense helps B2B marketing and sales organizations fully understand the complex ABM
buyer journey. By combining intent signals from every channel with the industry’s most
advanced AI predictive capabilities, it is finally possible to predict account demand and
optimize demand generation in an ABM world. Equipped with the power of AI and the
6sense Demand Platform™, marketing and sales professionals can uncover, prioritize,
and engage buyers to drive more revenue.
6sense is seeking a Staff Software Engineer, Data, to become part of a team
designing, developing, and deploying its customer-centric applications.
We’ve more than doubled our revenue in the past five years and completed our Series
E funding of $200M last year, giving us a stable foundation for growth.
Responsibilities:
1. Own critical datasets and data pipelines for product & business, and work towards direct business goals of increased data coverage, data match rates, data quality, and data freshness
2. Create more value from various datasets with creative solutions, unlock more value from existing data, and help build a data moat for the company
3. Design, develop, test, deploy, and maintain optimal data pipelines, and assemble large, complex data sets that meet functional and non-functional business requirements
4. Improve our current data pipelines, i.e. improve their performance and SLAs, remove redundancies, and figure out a way to test before vs. after rollout
5. Identify, design, and implement process improvements in data flow across multiple stages and via collaboration with multiple cross-functional teams, e.g. automating manual processes, optimising data delivery, and hand-off processes
6. Work with cross-functional stakeholders, including the Product, Data Analytics, and Customer Support teams, to enable their data access and related goals
7. Build for security, privacy, scalability, reliability, and compliance
8. Mentor and coach other team members on scalable and extensible solution design and best coding standards
9. Help build a team and cultivate innovation by driving cross-collaboration and execution of projects across multiple teams
Requirements:
- 8-10+ years of overall work experience as a Data Engineer
- Excellent analytical and problem-solving skills
- Strong experience with Big Data technologies like Apache Spark; experience with Hadoop, Hive, and Presto would be a plus
- Strong experience in writing complex, optimized SQL queries across large data sets, and experience with optimizing queries and the underlying storage
- Experience with Python/Scala
- Experience with Apache Airflow or other orchestration tools
- Experience with writing Hive/Presto UDFs in Java (an analogous Spark UDF is sketched after this list)
- Experience working on the AWS cloud platform and its services
- Experience with key-value stores or NoSQL databases would be a plus
- Comfortable with the Unix/Linux command line
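The UDF requirement above names Java for Hive/Presto; as a rough analogue only, here is a Spark SQL UDF registered from Scala, with purely illustrative normalisation logic:

```scala
import org.apache.spark.sql.SparkSession

object UdfExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("UdfExample").getOrCreate()

    // A scalar UDF, similar in spirit to a Hive/Presto UDF:
    // normalise free-text country names (logic is illustrative only)
    spark.udf.register("norm_country", (s: String) =>
      Option(s).map(_.trim.toLowerCase).orNull)

    spark.sql("SELECT norm_country(' India ') AS country").show()
    spark.stop()
  }
}
```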
Interpersonal Skills:
- You can work independently as well as part of a team.
- You take ownership of projects and drive them to conclusion.
- You're a good communicator and are capable of not just doing the work, but also teaching others and explaining the "why" behind complicated technical decisions.
- You aren't afraid to roll up your sleeves: this role will evolve over time, and we'll want you to evolve with it.
We are looking for an exceptionally talented Lead Data Engineer who has experience implementing AWS services to build data pipelines, integrate APIs, and design data warehouses. A candidate with both hands-on and leadership capabilities will be ideal for this position.
Qualification: At least a bachelor's degree in Science, Engineering, or Applied Mathematics; a master's degree is preferred.
Job Requirements:
• Total 6+ years of experience as a Data Engineer, including 2+ years of experience managing a team
• Minimum 3 years of AWS Cloud experience
• Well versed in languages and frameworks such as Python, PySpark, SQL, and NodeJS
• Extensive experience in the real-time Spark ecosystem, having worked on both real-time and batch processing (a streaming sketch follows this list)
• Experience with AWS Glue, EMR, DMS, Lambda, S3, DynamoDB, Step Functions, Airflow, RDS, Aurora, etc.
• Experience with modern data systems such as Redshift, Presto, and Hive
• Has built data lakes on S3 or with Apache Hudi
• Solid understanding of data warehousing concepts
• Good to have: experience with tools such as Kafka or Kinesis
• Good to have: AWS Developer Associate or Solutions Architect Associate certification
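As a hedged sketch of what the real-time Spark item can look like, here is a minimal Structured Streaming job in Scala reading from Kafka; the broker address, topic, and checkpoint path are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath:

```scala
import org.apache.spark.sql.SparkSession

object ClickstreamStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ClickstreamStream").getOrCreate()

    // Read a Kafka topic as an unbounded stream (hypothetical broker/topic)
    val clicks = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "clicks")
      .load()
      .selectExpr("CAST(value AS STRING) AS click")

    // Write each micro-batch to the console; a real job would sink to S3/Hudi
    val query = clicks.writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/chk/clicks")
      .start()

    query.awaitTermination()
  }
}
```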
The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.
Responsibilities for Data Engineer
• Create and maintain optimal data pipeline architecture.
• Assemble large, complex data sets that meet functional / non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes,
optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data
from a wide variety of data sources using SQL and AWS big data technologies.
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer
acquisition, operational efficiency and other key business performance metrics.
• Work with stakeholders including the Executive, Product, Data and Design teams to assist with
data-related technical issues and support their data infrastructure needs.
• Create data tools for analytics and data scientist team members that assist them in building and
optimizing our product into an innovative industry leader.
• Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
• Experience building and optimizing big data ETL pipelines, architectures and data sets.
• Advanced working SQL knowledge and experience with relational databases, including query
authoring (SQL) and working familiarity with a variety of databases.
• Experience performing root cause analysis on internal and external data and processes to
answer specific business questions and identify opportunities for improvement.
• Strong analytic skills related to working with unstructured datasets.
• Experience building processes supporting data transformation, data structures, metadata, dependency, and
workload management.
• A successful history of manipulating, processing and extracting value from large disconnected
datasets.
The Data Engineering team is one of the core technology teams of Lumiq.ai and is responsible for creating all the Data related products and platforms which scale for any amount of data, users, and processing. The team also interacts with our customers to work out solutions, create technical architectures and deliver the products and solutions.
If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer or how a customer can use our products, then Lumiq is the place of opportunities.
Who are you?
- Enthusiast is your middle name. You know what’s new in Big Data technologies and how things are moving
- Apache is your toolbox and you have been a contributor to open source projects or have discussed the problems with the community on several occasions
- You use cloud for more than just provisioning a Virtual Machine
- Vim is friendly to you and you know how to exit Nano
- You check logs before screaming about an error
- You are a solid engineer who writes modular code and commits in Git
- You are a doer who doesn’t say “no” without first understanding
- You understand the value of documentation of your work
- You are familiar with Machine Learning Ecosystem and how you can help your fellow Data Scientists to explore data and create production-ready ML pipelines
Eligibility
Experience
- At least 2 years of Data Engineering Experience
- Have interacted with Customers
Must Have Skills
- Amazon Web Services (AWS) - EMR, Glue, S3, RDS, EC2, Lambda, SQS, SES
- Apache Spark (see the combined sketch after this list)
- Python
- Scala
- PostgreSQL
- Git
- Linux
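A small hedged sketch combining several of these must-have skills, Spark reading a PostgreSQL table over JDBC from Scala; the host, database, table, and credentials are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

object PostgresRead {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("PostgresRead").getOrCreate()

    // Read a PostgreSQL table over JDBC (hypothetical connection details);
    // the org.postgresql JDBC driver must be on the classpath
    val customers = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/appdb")
      .option("dbtable", "public.customers")
      .option("user", "etl_user")
      .option("password", sys.env.getOrElse("PG_PASSWORD", ""))
      .load()

    customers.show(10)
    spark.stop()
  }
}
```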
Good to have Skills
- Apache NiFi
- Apache Kafka
- Apache Hive
- Docker
- Amazon Certification
Function: Sr. DB Developer
Location: India/Gurgaon/Tamilnadu
>> THE INDIVIDUAL
- Have a strong background in data platform creation and management.
- Possess in-depth knowledge of Data Management, Data Modelling, Ingestion - Able to develop data models and ingestion frameworks based on client requirements and advise on system optimization.
- Hands-on experience in SQL databases (PostgreSQL) and NoSQL databases (MongoDB)
- Hands-on experience in performance tuning of DB
- Good to have knowledge of database setup in cluster node
- Should be well versed with data security aspects and data governance framework
- Hands-on experience in Spark, Airflow, ELK.
- Good to have knowledge of a data cleansing tool like Apache Griffin
- Preferably has been involved during project implementation, and so has a background in business knowledge as well as technical requirements.
- Strong analytical and problem-solving skills. Exposure to data analytics and knowledge of advanced data analysis tools will be an advantage.
- Strong written and verbal communication skills (presentation skills).
- Certifications in the above technologies are preferred.
>> Qualification
- B.Tech/B.E./MCA/M.Tech from a reputed institute.
- More than 4 years of experience in Data Management, Data Modelling, and Ingestion; 8-10 years of total experience.
SpringML is looking to hire a top-notch Senior Data Engineer who is passionate about working with data and using the latest distributed frameworks to process large datasets. Your primary role will be to design and build data pipelines. You will be focused on helping client projects with data integration, data prep, and implementing machine learning on datasets. In this role, you will work on some of the latest technologies, collaborate with partners on early wins, take a consultative approach with clients, interact daily with executive leadership, and help build a great company. Chosen team members will be part of the core team and play a critical role in scaling up our emerging practice.
RESPONSIBILITIES:
- Ability to work as a member of a team assigned to design and implement data integration solutions.
- Build data pipelines using standard frameworks in Hadoop, Apache Beam, and other open-source solutions.
- Learn quickly – ability to understand and rapidly comprehend new areas – functional and technical – and apply detailed and critical thinking to customer solutions.
- Propose design solutions and recommend best practices for large scale data analysis
SKILLS:
- B.Tech degree in computer science, mathematics, or other relevant fields.
- 4+ years of experience in ETL, data warehousing, visualization, and building data pipelines.
- Strong programming skills – experience and expertise in one of the following: Java, Python, Scala, C.
- Proficient in big data/distributed computing frameworks such as Apache Spark and Kafka.
- Experience with Agile implementation methodologies