
50+ Big data Jobs in India

Apply to 50+ Big data Jobs on CutShort.io. Find your next job, effortlessly. Browse Big data Jobs and apply today!

Nirmitee.io

Posted by Disha Karia
Pune
3 - 7 yrs
₹12L - ₹18L / yr
Artificial Intelligence (AI)
Generative AI
Machine Learning (ML)
Clustering
Big Data
+1 more

Position: ML/AI Engineer (3+ years)


Responsibilities:

Design, develop, and deploy machine learning models with a focus on scalability and performance.

Implement unsupervised algorithms and handle large datasets for inference and analysis.

Collaborate with cross-functional teams to integrate AI/ML solutions into real-world applications.

Work on MLOps frameworks for model lifecycle management (optional but preferred).

Stay updated with the latest advancements in AI/ML technologies.


Requirements:

Proven experience in machine learning model building and deployment.

Hands-on experience with unsupervised learning algorithms (e.g., clustering, anomaly detection).

Proficiency in handling Big Data and related tools/frameworks.

Exposure to MLOps tools like MLflow, Kubeflow, or similar (preferred).

Familiarity with cloud platforms such as AWS, Azure, or GCP.
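
A minimal sketch of the unsupervised-learning skills named above (here, anomaly detection) in plain Python; the data and threshold are invented for illustration and this is not part of the role's actual codebase:

```python
def zscore_anomalies(values, threshold=3.0):
    """Return values lying more than `threshold` standard deviations from the mean."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std = variance ** 0.5
    if std == 0:
        return []  # all values identical: nothing can be anomalous
    return [v for v in values if abs(v - mean) / std > threshold]

# Sensor-style readings with one obvious outlier
readings = [10.2, 9.8, 10.0, 10.1, 9.9] * 4 + [100.0]
print(zscore_anomalies(readings))  # the outlier 100.0 is flagged
```

In production this logic would typically run over a distributed dataset (Spark, etc.), but the statistical core is the same.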


Location: Baner, Pune

Employment Type: Full-time

One of the most reputed service groups in Oman

Muscat, Oman
5 - 9 yrs
₹10L - ₹15L / yr
Power BI
Spotfire
QlikView
Tableau
Data Visualization
+6 more

Job Description: PEL Data Analyst

Job Title: PEL Data Analyst / PEL Admin Officer

About the Group

Our Client is one of the most reputed service groups in Oman’s construction and mining industry. The organization has grown from a small family business to one that leads the industry in construction contracting, manufacturing of cement products, building finishes products, roads, asphalt & infrastructure works, and mining, amongst other product offerings. The Group has achieved this by basing everything it does on its core values of HSE & Quality. With a diverse team of over 22,000 employees, The Group endeavours to serve the Sultanate with international-quality products & services to meet the demands of the growing nation.

Purpose of the job

Responsible for all day-to-day activities supporting the improvement of organizational effectiveness by working with extended teams & functions to develop and implement high-quality reports, dashboards, and performance indicators for the PEL Division.


Key Responsibilities & Activities:

1. Responsible for data modeling and analysis.

2. Responsible for deep data dives and extremely detailed analysis of various data points collected during the need identification phase. Organize and present data to line management for appropriate guidance and subsequent process mapping.

3. Responsible for rigorous periodic reviews of various individual, functional, business unit, and group metrics and indicators of success, including, but not limited to, key productivity indicators, result areas, health meter, and performance goals. Report results to line management and support the decision-making process.

4. Responsible for the development of high-quality analytical reports and training packs.

5. Responsible for all individual and specific departmental productivity targets, financial objectives, KPIs, and attainment thereof.

Other tasks:

1. Promoting The Group Values, Code of Conduct, and associated policies.

2. Participating in and providing positive contributions to the technical, commercial, planning, safety, and quality performance of the organization, in line with Client, Contractual, and regulatory requirements.

3. Visiting sites and project locations to discuss operational aspects and carry out training.

4. Undertaking any other responsibilities as directed and mutually agreed with Line Management.

The above list is not exhaustive. Individuals may be required to perform additional job-related tasks and duties as assigned.

Educational Qualifications

• Bachelor’s degree in engineering

• Masters (preferred)

Professional Certifications

• Certifications related to Data Analyst or Data Scientist (preferred)

Skills

• Advanced Excel, Macros, MS Power Query, MS Power BI, and Google Data Studio (Looker Studio)

• Knowledge of Python, M Code, and DAX is a plus

• Data management, big data, and data analysis

• Attention to detail and strong analytical skills.

• Strong data management skills

Experience

• 4 years of position-specific experience

• 7 years of overall professional experience

Language

• English (fluent)

• Arabic (preferred)



Client based at Bangalore location.

Agency job
Bengaluru (Bangalore)
8 - 11 yrs
₹20L - ₹40L / yr
Machine Learning (ML)
Cloud Computing
Fullstack Developer
Kubernetes
Python
+14 more

Job Title: Solution Architect

Work Location: Tokyo

Experience: 7-10 years

Number of Positions: 3

Job Description:

We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.

Responsibilities:

  • Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
  • Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
  • Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
  • Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
  • Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
  • Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
  • Contribute to the development of technical documentation and roadmaps.
  • Stay up-to-date with emerging technologies and propose enhancements to the solution design process.

Key Skills & Requirements:

  • Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
  • Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
  • Solid experience with Kubernetes for container orchestration and deployment.
  • Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
  • Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
  • Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
  • Strong experience in designing scalable architectures and applications from the ground up.
  • Experience with DevOps and automation tools for CI/CD pipelines.
  • Excellent problem-solving skills and ability to work in a fast-paced environment.
  • Strong communication skills and ability to collaborate with cross-functional teams.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
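
As a hedged illustration of the "implemented at least two algorithms (e.g., classification, clustering)" requirement above, here is a nearest-centroid classifier, one of the simplest classification algorithms, in plain Python; all names and data are invented for the example:

```python
def fit_centroids(samples, labels):
    """Compute the mean feature vector (centroid) of each class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist(centroids[y]))

train = [[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [7.8, 8.2]]
labels = ["low", "low", "high", "high"]
model = fit_centroids(train, labels)
print(predict(model, [1.1, 0.9]))  # falls near the "low" cluster
```

In practice a candidate would reach for an ML framework (TensorFlow, PyTorch, scikit-learn) rather than hand-rolling this, but the underlying idea is the same.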

Preferred Skills:

  • Experience with microservices architecture and containerization.
  • Knowledge of distributed systems and high-performance computing.
  • Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
  • Familiarity with Agile methodologies and Scrum.
  • Knowledge of the Japanese language is an additional advantage for the candidate, but is not mandatory.

 

Cornertree

Posted by Deepesh Shrimal
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 10 yrs
₹15L - ₹30L / yr
Cassandra
PySpark
Data engineering
Big Data
Hadoop
+3 more

Skills:

Experience with Cassandra, including installing, configuring, and monitoring a Cassandra cluster.

Experience with Cassandra data modeling and CQL scripting. Experience with DataStax Enterprise Graph.

Experience with both Windows and Linux operating systems. Knowledge of the Microsoft .NET Framework (C#, .NET Core).

Ability to perform effectively in a team-oriented environment

Smartavya Analytica

Agency job
via Pluginlive by Joslyn Gomes
Mumbai
10 - 15 yrs
₹15L - ₹25L / yr
Data Warehousing
Data Warehouse (DWH)
ETL
Data Visualization
Big Data
+7 more

Experience: 12-15 years, with 7 years in Big Data, Cloud, and Analytics.

Key Responsibilities: 

  • Technical Project Management:
    o Lead the end-to-end technical delivery of multiple projects in Big Data, Cloud, and Analytics. Lead teams in technical solutioning, design, and development.
    o Develop detailed project plans, timelines, and budgets, ensuring alignment with client expectations and business goals.
    o Monitor project progress, manage risks, and implement corrective actions as needed to ensure timely and quality delivery.
  • Client Engagement and Stakeholder Management:
    o Build and maintain strong client relationships, acting as the primary point of contact for project delivery.
    o Understand client requirements, anticipate challenges, and provide proactive solutions.
    o Coordinate with internal and external stakeholders to ensure seamless project execution.
    o Communicate project status, risks, and issues to senior management and stakeholders in a clear and timely manner.
  • Team Leadership:
    o Lead and mentor a team of data engineers, analysts, and project managers.
    o Ensure effective resource allocation and utilization across projects.
    o Foster a culture of collaboration, continuous improvement, and innovation within the team.
  • Technical and Delivery Excellence:
    o Leverage data management expertise and experience to guide and lead technical conversations effectively. Identify technical areas where the team needs support and work towards resolving them, either through your own expertise or by networking with internal and external stakeholders to unblock the team.
    o Implement best practices in project management, delivery, and quality assurance.
    o Drive continuous improvement initiatives to enhance delivery efficiency and client satisfaction.
    o Stay updated with the latest trends and advancements in Big Data, Cloud, and Analytics technologies.

Requirements:

  • Experience in IT delivery management, particularly in Big Data, Cloud, and Analytics.
  • Strong knowledge of project management methodologies and tools (e.g., Agile, Scrum, PMP).
  • Excellent leadership, communication, and stakeholder management skills.
  • Proven ability to manage large, complex projects with multiple stakeholders.
  • Strong critical thinking skills and the ability to make decisions under pressure.

Preferred Qualifications:

  • Bachelor’s degree in computer science, Information Technology, or a related field.
  • Relevant certifications in Big Data, cloud platforms (e.g., GCP, Azure, AWS, Snowflake, Databricks), Project Management, or similar areas are preferred.
Smartavya Analytica

Agency job
via Pluginlive by Joslyn Gomes
Mumbai
12 - 15 yrs
₹30L - ₹35L / yr
Hadoop
Cloudera
HDFS
Apache Hive
Apache Impala
+3 more

Experience: 12-15 Years

Key Responsibilities: 

  • Client Engagement & Requirements Gathering: Independently engage with client stakeholders to understand data landscapes and requirements, translating them into functional and technical specifications.
  • Data Architecture & Solution Design: Architect and implement Hadoop-based Cloudera CDP solutions, including data integration, data warehousing, and data lakes.
  • Data Processes & Governance: Develop data ingestion and ETL/ELT frameworks, ensuring robust data governance and quality practices.
  • Performance Optimization: Provide SQL expertise and optimize Hadoop ecosystems (HDFS, Ozone, Kudu, Spark Streaming, etc.) for maximum performance.
  • Coding & Development: Hands-on coding in relevant technologies and frameworks, ensuring project deliverables meet stringent quality and performance standards.
  • API & Database Management: Integrate APIs and manage databases (e.g., PostgreSQL, Oracle) to support seamless data flows.
  • Leadership & Mentoring: Guide and mentor a team of data engineers and analysts, fostering collaboration and technical excellence.

Skills Required:

  a. Technical Proficiency:
    • Extensive experience with Hadoop ecosystem tools and services (HDFS, YARN, Cloudera Manager, Impala, Kudu, Hive, Spark Streaming, etc.).
    • Proficiency in Spark and in programming languages such as Python and Scala, with a strong grasp of SQL performance tuning.
    • ETL tool expertise (e.g., Informatica, Talend, Apache NiFi) and data modelling knowledge.
    • API integration skills for effective data flow management.
  b. Project Management & Communication:
    • Proven ability to lead large-scale data projects and manage project timelines.
    • Excellent communication, presentation, and critical thinking skills.
  c. Client & Team Leadership:
    • Engage effectively with clients and partners, leading onsite and offshore teams.

Affine
Posted by Rishika Chadha
Remote only
5 - 8 yrs
Best in industry
Scala
ETL
Apache Kafka
Object Oriented Programming (OOPs)
CI/CD
+4 more

Role Objective:


The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.


Roles & Responsibilities:

  • Sound knowledge of Spark architecture, distributed computing, and Spark Streaming.
  • Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning.
  • SFDC (data modelling) experience would be given preference.
  • Good understanding of object-oriented concepts and hands-on experience with Scala, with excellent programming logic and technique.
  • Good grasp of functional programming and OOP concepts in Scala.
  • Good experience in SQL; should be able to write complex queries.
  • Managing the team of Associates and Senior Associates and ensuring the utilization is maintained across the project.
  • Able to mentor new members for onboarding to the project.
  • Understand the client requirement and able to design, develop from scratch and deliver.
  • AWS cloud experience would be preferable.
  • Design, build and operationalize large scale enterprise data solutions and applications using one or more of AWS data and analytics services - DynamoDB, RedShift, Kinesis, Lambda, S3, etc. (preferred)
  • Hands on experience utilizing AWS Management Tools (CloudWatch, CloudTrail) to proactively monitor large and complex deployments (preferred)
  • Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on AWS (preferred)
  • Leading the client calls to flag off any delays, blockers, escalations and collate all the requirements.
  • Managing project timing, client expectations and meeting deadlines.
  • Should have played project and team management roles.
  • Facilitate meetings within the team on regular basis.
  • Understand business requirement and analyze different approaches and plan deliverables and milestones for the project.
  • Optimization, maintenance, and support of pipelines.
  • Strong analytical and logical skills.
  • Ability to comfortably tackle new challenges and learn.
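
The Spark RDD skills listed above center on chained transformations (flatMap, map, reduceByKey). As an illustrative, Spark-free sketch, the classic word-count pipeline that Spark distributes across a cluster can be written in plain Python:

```python
from functools import reduce

# Word count, written as the map -> reduce pipeline that
# Spark's RDD API (flatMap, map, reduceByKey) parallelizes.
lines = ["big data big plans", "data pipelines move data"]

# flatMap: split each line into words
words = [w for line in lines for w in line.split()]

# map: pair each word with a count of 1
pairs = [(w, 1) for w in words]

# reduceByKey: sum the counts per word
def merge(acc, pair):
    word, count = pair
    acc[word] = acc.get(word, 0) + count
    return acc

counts = reduce(merge, pairs, {})
print(counts)  # {'big': 2, 'data': 3, 'plans': 1, 'pipelines': 1, 'move': 1}
```

The Spark version replaces the list comprehensions with lazy, partitioned transformations, but the logic a candidate is expected to reason about is identical.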
Eastvantage private limited
Bengaluru (Bangalore)
5 - 8 yrs
₹12L - ₹20L / yr
Qualtrics
SQL
Big Data

As a Senior Analyst, you will play a crucial role in improving customer experience, retention, and growth by identifying opportunities in the moments that matter and surfacing them to teams across functions. You have experience with collecting, stitching, and analyzing Voice of Customer and CX data, and are passionate about customer feedback. You will partner with the other Analysts on the team, as well as the Directors, in delivering impactful presentations that drive action across the organization.

You come with an understanding of customer feedback and experience analytics and the ability to tell a story with data. You are a go-getter who can take a request and run with it independently but also does not shy away from collaborating with the other team members. You are flexible and thrive in a fast-paced environment with changing priorities.

Responsibilities

  • Stitch and analyze data from primary/secondary sources to determine key drivers of customer success, loyalty, risk, churn and overall experience.
  • Verbalize and translate these insights into actionable tasks.
  • Develop monthly and ad-hoc reporting.
  • Maintain Qualtrics dashboards and surveys.
  • Take on ad-hoc tasks related to XM platform onboarding, maintenance or launch of new tools
  • Understand data sets within the Data Services Enterprise Data platform and other systems
  • Translate user requirements independently
  • Work with Business Insights, IT and technical teams
  • Create PowerPoint decks and present insights to stakeholders

Qualifications

  • Bachelor’s degree in Data Science/Analytics, Statistics, or Business. Master’s degree is a plus.
  • Ability to extract data and customer insights, analyze audio or speech, and present them in any visualization tool.
  • 5+ years of experience in an analytical, customer insights, or related business function.
  • Basic knowledge of SQL and Google BigQuery.
  • 1+ year of experience working with Qualtrics, Medallia, or another Experience Management platform.
  • Hands-on experience with statistical techniques: profiling, regression analysis, trend analysis, segmentation.
  • Well-organized and high-energy.

Non-technical requirements:

  • You have experience working on client-facing roles.
  • You are available to work from our Bangalore office from Day 1 in a night shift (US shift).
  • You have strong communication skills.
  • You have strong analytical skills.


Hyderabad
3 - 6 yrs
₹10L - ₹16L / yr
SQL
Spark
Analytical Skills
Hadoop
Communication Skills
+4 more

The Sr. Analytics Engineer would provide technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective.


Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.


Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.


Actively participates with other consultants in problem-solving and approach development.


Responsibilities :


Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.


Perform data analysis to validate data models and to confirm the ability to meet business needs.


Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.


Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels.


Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.


Coordinate with Data Architects, Program Managers and participate in recurring meetings.


Help and mentor team members to understand the data model and subject areas.


Ensure that the team adheres to best practices and guidelines.


Requirements :


- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.


- Experience with Spark optimization/tuning/resource allocations


- Excellent understanding of in-memory distributed computing frameworks like Spark, including parameter tuning and writing optimized workflow sequences.


- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., Redshift, BigQuery, Cassandra, etc.).


- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob storage, AWS S3, Google Cloud storage, etc.


- Have a deep understanding of the various stacks and components of the Big Data ecosystem.


- Hands-on experience with Python is a huge plus

TVARIT GmbH

Posted by Shivani Kawade
Remote, Pune
2 - 4 yrs
₹8L - ₹20L / yr
Python
PySpark
ETL
Databricks
Azure
+6 more

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.

We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.

We are seeking a skilled and motivated Data Engineer from the manufacturing industry with over two years of experience to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.

Skills Required 

  • Experience in the manufacturing industry (metal industry is a plus)  
  • 2+ years of experience as a Data Engineer 
  • Experience in data cleaning & structuring and data manipulation 
  • ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines. 
  • Python: Strong proficiency in Python programming for data manipulation, transformation, and automation. 
  • Experience in SQL and data structures  
  • Knowledge of big data technologies such as Spark, Flink, and Hadoop, and of NoSQL databases.
  • Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform. 
  • Proficient in data management and data governance  
  • Strong analytical and problem-solving skills. 
  • Excellent communication and teamwork abilities. 
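
The ETL-pipeline skill listed above can be sketched as three small Python functions (extract, transform, load); the table and field names here are invented for illustration and are not TVARIT's actual schema:

```python
import csv
import io
import sqlite3

# Extract: read raw sensor rows from a CSV source (here, an in-memory string).
RAW = "machine,temp_c\npress_01,71.5\npress_02,\npress_03,68.0\n"

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

# Transform: drop rows with missing readings and convert Celsius to Fahrenheit.
def transform(rows):
    out = []
    for r in rows:
        if r["temp_c"]:
            out.append((r["machine"], round(float(r["temp_c"]) * 9 / 5 + 32, 1)))
    return out

# Load: write the cleaned rows into a target table.
def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS readings (machine TEXT, temp_f REAL)")
    conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
print(conn.execute("SELECT * FROM readings").fetchall())
# [('press_01', 160.7), ('press_03', 154.4)]
```

A production pipeline would swap the in-memory pieces for real sources and a warehouse, and add scheduling and monitoring, but the extract/transform/load separation is the core pattern.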

 


Nice To Have 

  • Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database). 
  • Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.


TVARIT GmbH

Posted by Shivani Kawade
Remote, Pune
2 - 6 yrs
₹8L - ₹25L / yr
SQL Azure
Databricks
Python
SQL
ETL
+9 more

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.


We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.


We are seeking a skilled and motivated senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building & enhancing a data lake-house & subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.


Skills Required:


  • Experience in the manufacturing industry (metal industry is a plus)
  • 4+ years of experience as a Data Engineer
  • Experience in data cleaning & structuring and data manipulation
  • Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
  • ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
  • Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
  • Experience in SQL and data structures
  • Knowledge of big data technologies such as Spark, Flink, and Hadoop, and of NoSQL databases.
  • Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
  • Proficient in data management and data governance
  • Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
  • Strong analytical and problem-solving skills.
  • Excellent communication and teamwork abilities.


Nice To Have:

  • Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
  • Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).

Benefits And Perks

  • A culture that fosters innovation, creativity, continuous learning, and resilience
  • Progressive leave policy promoting work-life balance
  • Mentorship opportunities with highly qualified internal resources and industry-driven programs
  • Multicultural peer groups and supportive workplace policies
  • Annual workcation program allowing you to work from various scenic locations
  • Experience the unique environment of a dynamic start-up


Why should you join TVARIT?


Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.


If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!

Smartavya

Agency job
via Pluginlive by Harsha Saggi
Mumbai
10 - 18 yrs
₹35L - ₹40L / yr
Hadoop
Architecture
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
PySpark
+13 more
  • Architectural Leadership:
    o Design and architect robust, scalable, and high-performance Hadoop solutions.
    o Define and implement data architecture strategies, standards, and processes.
    o Collaborate with senior leadership to align data strategies with business goals.
  • Technical Expertise:
    o Develop and maintain complex data processing systems using Hadoop and its ecosystem (HDFS, YARN, MapReduce, Hive, HBase, Pig, etc.).
    o Ensure optimal performance and scalability of Hadoop clusters.
    o Oversee the integration of Hadoop solutions with existing data systems and third-party applications.
  • Strategic Planning:
    o Develop long-term plans for data architecture, considering emerging technologies and future trends.
    o Evaluate and recommend new technologies and tools to enhance the Hadoop ecosystem.
    o Lead the adoption of big data best practices and methodologies.
  • Team Leadership and Collaboration:
    o Mentor and guide data engineers and developers, fostering a culture of continuous improvement.
    o Work closely with data scientists, analysts, and other stakeholders to understand requirements and deliver high-quality solutions.
    o Ensure effective communication and collaboration across all teams involved in data projects.
  • Project Management:
    o Lead large-scale data projects from inception to completion, ensuring timely delivery and high quality.
    o Manage project resources, budgets, and timelines effectively.
    o Monitor project progress and address any issues or risks promptly.
  • Data Governance and Security:
    o Implement robust data governance policies and procedures to ensure data quality and compliance.
    o Ensure data security and privacy by implementing appropriate measures and controls.
    o Conduct regular audits and reviews of data systems to ensure compliance with industry standards and regulations.
Sadup Softech

Posted by madhuri g
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Must have skills


3 to 6 years

Data Science

SQL, Excel, BigQuery - mandatory, 3+ years

Python/ML, Hadoop, Spark - 2+ years


Requirements


• 3+ years prior experience as a data analyst

• Detail-oriented, with structured thinking and an analytical mindset.

• Proven analytic skills, including data analysis and data validation.

• Technical writing experience in relevant areas, including queries, reports, and presentations.

• Strong SQL and Excel skills with the ability to learn other analytic tools

• Good communication skills (being precise and clear)

• Prior knowledge of Python and ML algorithms is good to have.

Nyteco

Posted by Alokha Raj
Remote only
4 - 6 yrs
₹17L - ₹20L / yr
Data Transformation Tool (DBT)
ETL
SQL
Big Data
Google Cloud Platform (GCP)
+2 more

Join Our Journey

Jules develops an amazing end-to-end solution for recycled materials traders, importers, and exporters, which means a looooot of internal, structured data to play with in order to provide reporting, alerting, and insights to end-users. With about 200 tables covering all business processes from order management to payments, including logistics, hedging, and claims, the wealth of insight the data entered in Jules can unlock is massive.


After working on a simple stack made of Postgres, SQL queries and a visualization solution, the company is now ready to set up its data stack, and is only missing you. We are thinking DBT, Redshift or Snowflake, Fivetran, Metabase or Luzmo, etc. We also have an AI team already playing around with text-driven data interaction.


As a Data Engineer at Jules AI, your duties will involve both data engineering and product analytics, enhancing our data ecosystem. You will collaborate with cross-functional teams to design, develop, and sustain data pipelines, and conduct detailed analyses to generate actionable insights.


Roles And Responsibilities:

  • Work with stakeholders to determine data needs, and design and build scalable data pipelines.
  • Develop and sustain ELT processes to guarantee timely and precise data availability for analytical purposes.
  • Construct and oversee large-scale data pipelines that collect data from various sources.
  • Expand and refine our DBT setup for data transformation.
  • Engage with our data platform team to address customer issues.
  • Apply your advanced SQL and big data expertise to develop innovative data solutions.
  • Enhance and debug existing data pipelines for improved performance and reliability.
  • Generate and update dashboards and reports to share analytical results with stakeholders.
  • Implement data quality controls and validation procedures to maintain data accuracy and integrity.
  • Work with various teams to incorporate analytics into product development efforts.
  • Use technologies like Snowflake, DBT, and Fivetran effectively.
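As a rough illustration of the data-quality controls mentioned above, here is a minimal pure-Python sketch of the checks dbt ships as `not_null` and `unique` tests (in a real DBT setup these would be declared in schema YAML rather than hand-rolled; all names here are hypothetical):

```python
# Toy rows standing in for a transformed table; column names are made up.
rows = [
    {"order_id": 1, "status": "paid"},
    {"order_id": 2, "status": "open"},
    {"order_id": 3, "status": "paid"},
]

def check_not_null(rows, column):
    """Return indexes of rows where `column` is missing or None."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        v = r[column]
        (dupes if v in seen else seen).add(v)
    return sorted(dupes)

# An empty result means the check passed, mirroring dbt's pass/fail semantics.
print(check_not_null(rows, "status"), check_unique(rows, "order_id"))
```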


Mandatory Qualifications:

  • Hold a Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • Possess at least 4 years of experience in Data Engineering, ETL Building, database management, and Data Warehousing.
  • Demonstrated expertise as an Analytics Engineer or in a similar role.
  • Proficient in SQL, a scripting language (Python), and a data visualization tool.
  • Mandatory experience in working with DBT.
  • Experience in working with Airflow, and cloud platforms like AWS, GCP, or Snowflake.
  • Deep knowledge of ETL/ELT patterns.
  • Require at least 1 year of experience in building Data pipelines and leading data warehouse projects.
  • Experienced in mentoring data professionals across all levels, from junior to senior.
  • Proven track record in establishing new data engineering processes and navigating through ambiguity.
  • Preferred Skills: Knowledge of Snowflake and reverse ETL tools is advantageous.


Grow, Develop, and Thrive With Us

  • Global Collaboration: Work with a dynamic team that’s making an impact across the globe, in the recycling industry and beyond. We have customers in India, Singapore, the United States, Mexico, Germany, France and more
  • Professional Growth: a fast track toward setting up a great data team and evolving into a leader
  • Flexible Work Environment: Competitive compensation, performance-based rewards, health benefits, paid time off, and flexible working hours to support your well-being.


Apply to us directly : https://nyteco.keka.com/careers/jobdetails/41442

Read more
Product company

Product company

Agency job
via SangatHR by Valli Subramaniam
Guindy
3 - 6 yrs
₹5L - ₹12L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
JavaScript
+7 more

Python Data Engineer

Job Description:

• Design, develop, and maintain database scripts and procedures to support application requirements.

• Collaborate with software developers to integrate database scripts with application code.

• Troubleshoot and resolve database issues in a timely manner.

• Perform database maintenance tasks, such as backups, restores, and migrations.

• Implement data security measures to protect sensitive information.

• Develop and maintain documentation for database scripts and procedures.

• Stay up-to-date with emerging technologies and best practices in database management.


Job Requirements:

• Bachelor’s degree in Computer Science, Information Technology, or related field.

• 3+ years of proven experience as a Database Engineer or similar role, with Python.

• Proficiency in SQL and scripting languages such as Python or JS.

• Strong understanding of database management systems, including relational databases (e.g., MySQL, PostgreSQL, SQL Server) and NoSQL databases (e.g., MongoDB, Cassandra).

• Experience with database design principles and data modelling techniques.

• Knowledge of database optimisation techniques and performance tuning.

• Familiarity with version control systems (e.g., Git) and continuous integration tools.

• Excellent problem-solving skills and attention to detail.

• Strong communication and collaboration skills.
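For instance, the backup task above can be sketched with Python's built-in sqlite3 module (schema and data are made up for illustration; production systems would target MySQL/PostgreSQL with their own tooling):

```python
import sqlite3

# Hypothetical schema for illustration; a real backup would target a file or server.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES ('asha'), ('ravi')")
src.commit()

# Connection.backup copies the whole database page by page, which is the
# standard way to take an online backup of a live SQLite database.
dst = sqlite3.connect(":memory:")
src.backup(dst)

count = dst.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)
```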

Read more
Sadup Softech

at Sadup Softech

1 recruiter
madhuri g
Posted by madhuri g
Remote only
4 - 6 yrs
₹4L - ₹15L / yr
Google Cloud Platform (GCP)
BigQuery
PySpark
Data engineering
Big Data
+2 more

Job Description:

We are seeking a talented Machine Learning Engineer with expertise in software engineering to join our team. As a Machine Learning Engineer, your primary responsibility will be to develop machine learning (ML) solutions that focus on technology process improvements. Specifically, you will be working on projects involving ML & Generative AI solutions for technology & data management efficiencies, such as optimal cloud computing, knowledge bots, software code assistants, automatic data management, etc.

 

Responsibilities:

- Collaborate with cross-functional teams to identify opportunities for technology process improvements that can be solved using machine learning and generative AI.

- Define and build innovative ML and Generative AI systems, such as AI assistants for varied SDLC tasks, and improve data & infrastructure management.

- Design and develop ML engineering solutions, generative AI applications, and fine-tuning of Large Language Models (LLMs) for the above, ensuring scalability, efficiency, and maintainability of such solutions.

- Implement prompt engineering techniques to fine-tune and enhance LLMs for better performance and application-specific needs.

- Stay abreast of the latest advancements in the field of Generative AI and actively contribute to the research and development of new ML & Generative AI Solutions.

 

Requirements:

- A Master's or Ph.D. degree in Computer Science, Statistics, Data Science, or a related field.

- Proven experience working as a Software Engineer, with a focus on ML Engineering and exposure to Generative AI applications such as ChatGPT.

- Strong proficiency in programming languages such as Java, Scala, and Python, and platforms such as Google Cloud, BigQuery, Hadoop, and Spark.

- Solid knowledge of software engineering best practices, including version control systems (e.g., Git), code reviews, and testing methodologies.

- Familiarity with large language models (LLMs), prompt engineering techniques, vector DBs, embeddings, and various fine-tuning techniques.

- Strong communication skills to effectively collaborate and present findings to both technical and non-technical stakeholders.

- Proven ability to adapt and learn new technologies and frameworks quickly.

- A proactive mindset with a passion for continuous learning and research in the field of Generative AI.

 

If you are a skilled and innovative Machine Learning Engineer with a passion for Generative AI and a desire to contribute to technology process improvements, we would love to hear from you. Join our team and help shape the future of our AI-driven technology solutions.

Read more
Radisys India

at Radisys India

1 recruiter
Sai Kiran
Posted by Sai Kiran
Bengaluru (Bangalore)
5 - 10 yrs
₹5L - ₹25L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
MongoDB
+4 more

Radisys Corporation, a global leader in open telecom solutions, enables service providers to drive disruption with new open architecture business models. Our innovative technology solutions leverage open reference architectures and standards, combined with open software and hardware, to power business transformation for the telecom industry. Our services organization delivers systems integration expertise necessary to solve complex deployment challenges for communications and content providers.


Job Overview :


We are looking for a Lead Engineer - Java with a strong background in Java development and hands-on experience with J2EE, Springboot, Kubernetes, Microservices, NoSQL, and SQL. As a Lead Engineer, you will be responsible for designing and developing high-quality software solutions and ensuring the successful delivery of projects. This is a full-time role for candidates with 7 to 10 years of experience, based in Bangalore, Karnataka, India, with excellent growth opportunities.


Qualifications and Skills :


- Bachelor's or master's degree in Computer Science or a related field


- Strong knowledge of Core Java, J2EE, and Springboot frameworks


- Hands-on experience with Kubernetes and microservices architecture


- Experience with NoSQL and SQL databases


- Proficient in troubleshooting and debugging complex system issues


- Experience in Enterprise Applications


- Excellent communication and leadership skills


- Ability to work in a fast-paced and collaborative environment


- Strong problem-solving and analytical skills


Roles and Responsibilities :


- Work closely with product management and cross-functional teams to define requirements and deliverables


- Design scalable and high-performance applications using Java, J2EE, and Springboot


- Develop and maintain microservices using Kubernetes and containerization


- Design and implement data models using NoSQL and SQL databases


- Ensure the quality and performance of software through code reviews and testing


- Collaborate with stakeholders to identify and resolve technical issues


- Stay up-to-date with the latest industry trends and technologies


Read more
PloPdo
Chandan Nadkarni
Posted by Chandan Nadkarni
Hyderabad
3 - 12 yrs
₹22L - ₹25L / yr
Cassandra
Data modeling

Responsibilities -

  • Collaborate with the development team to understand data requirements and identify potential scalability issues.
  • Design, develop, and implement scalable data pipelines and ETL processes to ingest, process, and analyse large volumes of data from various sources.
  • Optimize data models and database schemas to improve query performance and reduce latency.
  • Monitor and troubleshoot the performance of our Cassandra database on Azure Cosmos DB, identifying bottlenecks and implementing optimizations as needed.
  • Work with cross-functional teams to ensure data quality, integrity, and security.
  • Stay up to date with emerging technologies and best practices in data engineering and distributed systems.
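One common Cassandra modelling technique for the partition-optimization work above is bucketing the partition key by time, so no single partition grows without bound or becomes a hotspot. A minimal sketch with hypothetical sensor data (in CQL terms this corresponds to something like `PRIMARY KEY ((sensor_id, day), ts)`):

```python
from datetime import datetime

# Hypothetical time-series workload: bucket the partition key by
# (sensor_id, day) so each partition stays bounded in size.
def partition_key(sensor_id: str, ts: datetime) -> tuple:
    return (sensor_id, ts.strftime("%Y-%m-%d"))

k1 = partition_key("sensor-42", datetime(2024, 5, 1, 9, 30))
k2 = partition_key("sensor-42", datetime(2024, 5, 1, 23, 59))
k3 = partition_key("sensor-42", datetime(2024, 5, 2, 0, 0))
print(k1, k2, k3)  # first two land in the same partition, the third in a new one
```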


Qualifications & Requirements -

  • Proven experience as a Data Engineer or similar role, with a focus on designing and optimizing large-scale data systems.
  • Strong proficiency in working with NoSQL databases, particularly Cassandra.
  • Experience with cloud-based data platforms, preferably Azure Cosmos DB.
  • Solid understanding of distributed systems, data modelling, data warehouse design, and ETL processes.
  • Detailed understanding of Software Development Life Cycle (SDLC) is required.
  • Good to have knowledge on any visualization tool like Power BI, Tableau.
  • Good to have knowledge on SAP landscape (SAP ECC, SLT, BW, HANA etc).
  • Good to have experience on Data Migration Project.
  • Knowledge of Supply Chain domain would be a plus.
  • Familiarity with software architecture (data structures, data schemas, etc.)
  • Familiarity with Python programming language is a plus.
  • The ability to work in a dynamic, fast-paced, work environment.
  • A passion for data and information with strong analytical, problem solving, and organizational skills.
  • Self-motivated with the ability to work under minimal direction.
  • Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.


Read more
TensorGo Software Private Limited
Deepika Agarwal
Posted by Deepika Agarwal
Hyderabad
7 - 12 yrs
₹15L - ₹15L / yr
Engineering Management
Java
NodeJS (Node.js)
Microservices
Big Data
+4 more

Role & Responsibilities

  1. Create innovative architectures based on business requirements.
  2. Design and develop cloud-based solutions for global enterprises.
  3. Coach and nurture engineering teams through feedback, design reviews, and best practice input.
  4. Lead cross-team projects, ensuring resolution of technical blockers.
  5. Collaborate with internal engineering teams, global technology firms, and the open-source community.
  6. Lead initiatives to learn and apply modern and advanced technologies.
  7. Oversee the launch of innovative products in high-volume production environments.
  8. Develop and maintain high-quality software applications using JS frameworks (React, NPM, Node.js, etc.).
  9. Utilize design patterns for backend technologies and ensure strong coding skills.
  10. Deploy and manage applications on AWS cloud services, including ECS (Fargate), Lambda, and load balancers. Work with Docker to containerize services.
  11. Implement and follow CI/CD practices using GitLab for automated build, test, and deployment processes.
  12. Collaborate with cross-functional teams to design technical solutions, ensuring adherence to Microservice Design patterns and Architecture.
  13. Apply expertise in Authentication & Authorization protocols (e.g., JWT, OAuth), including certificate handling, to ensure robust application security.
  14. Utilize databases such as Postgres, MySQL, Mongo and DynamoDB for efficient data storage and retrieval.
  15. Demonstrate familiarity with Big Data technologies, including but not limited to:

- Apache Kafka for distributed event streaming.

- Apache Spark for large-scale data processing.

- Containers for scalable and portable deployments.
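As a toy illustration of the event-streaming pattern behind systems like Kafka listed above: producers append events, and consumers process them in order. This in-memory sketch is only conceptual; Kafka uses partitioned, durable logs rather than a process-local queue.

```python
from queue import Queue
from threading import Thread

events = Queue()   # stands in for a topic
results = []

def consumer():
    # Consume events in arrival order until the stop sentinel appears.
    while True:
        msg = events.get()
        if msg is None:
            break
        results.append(msg.upper())

t = Thread(target=consumer)
t.start()
for msg in ["signup", "payment", "refund"]:  # producer side
    events.put(msg)
events.put(None)  # sentinel: no more events
t.join()
print(results)
```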


Technical Skills:

  1. 7+ years of hands-on development experience with JS frameworks, specifically MERN.
  2. Strong coding skills in backend technologies using various design patterns.
  3. Strong UI development skills using React.
  4. Expert in containerization using Docker.
  5. Knowledge of cloud platforms, specifically OCI, and familiarity with serverless technology, services like ECS, Lambda, and load balancers.
  6. Proficiency in CI/CD practices using GitLab or Bamboo.
  7. Strong knowledge of Microservice Design patterns and Architecture.
  8. Expertise in Authentication and authorization protocols like JWT, and OAuth including certificate handling.
  9. Experience working with high stream media data.
  10. Experience working with databases such as Postgres, MySQL, and DynamoDB.
  11. Familiarity with Big Data technologies related to Kafka, PySpark, Apache Spark, Containers, etc.
  12. Experience with container Orchestration tools like Kubernetes.


Read more
Acelucid Technologies Pvt Ltd
Shivani Tyagi
Posted by Shivani Tyagi
Tumkur, Dehradun
5 - 12 yrs
₹15L - ₹30L / yr
SQL server
SQL Query Analyzer
SQL Azure
Big Data
Database migration

Bachelor’s Degree in Information Technology or related field desirable.

• 5 years of Database administrator experience in Microsoft technologies

• Experience with Azure SQL in a multi-region configuration

• Azure certifications (Good to have)

• 2+ years’ experience in performing data migrations, upgrades/modernizations, and performance tuning on IaaS and PaaS Managed Instance and SQL Azure

• Experience with routine maintenance, recovery, and handling failover of databases

• Knowledge of RDBMS platforms, e.g., Microsoft SQL Server, and the Azure cloud platform

• Expertise in Microsoft SQL Server on VM, Azure SQL Managed Instance, and Azure SQL

• Experience in setting up and working with Azure data warehouse.



Read more
xyz

xyz

Agency job
via HR BIZ HUB by Pooja shankla
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹15L / yr
Java
Big Data
Apache Hive
Hadoop
Spark

Job Title: Big Data Developer

Job Description

Bachelor's degree in Engineering or Computer Science or equivalent OR Master's in Computer Applications or equivalent.

Solid software development experience, including leading teams of engineers and scrum teams.

4+ years of hands-on experience of working with Map-Reduce, Hive, Spark (core, SQL and PySpark).

Solid data warehousing concepts.

Knowledge of Financial reporting ecosystem will be a plus.

4+ years of experience within Data Engineering/Data Warehousing using Big Data technologies will be an added advantage.

Expertise in the distributed ecosystem.

Hands-on experience with programming using Core Java or Python/Scala

Expertise in Hadoop and Spark architecture and their working principles.

Hands-on experience in writing and understanding complex SQL (Hive/PySpark dataframes) and optimizing joins while processing huge amounts of data.

Experience in UNIX shell scripting.
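As a small illustration of the join-optimization theme above: when one side of a join is small, engines such as Spark avoid shuffling the large table by using a broadcast/hash join, building a hash map over the small table and streaming the large one. A toy Python sketch with made-up data:

```python
# Small dimension table, kept entirely in memory (the "broadcast" side).
dim_region = {1: "south", 2: "north"}          # region_id -> region name
# Large fact table, streamed row by row: (region_id, amount).
fact_sales = [(1, 100), (2, 250), (1, 50)]

# Hash join: probe the small map for each fact row (an inner join).
joined = [(dim_region[rid], amt) for rid, amt in fact_sales if rid in dim_region]

# Aggregate after the join, as a GROUP BY would.
totals = {}
for region, amt in joined:
    totals[region] = totals.get(region, 0) + amt
print(totals)
```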

Roles & Responsibilities

Ability to design and develop optimized Data pipelines for batch and real time data processing

Should have experience in analysis, design, development, testing, and implementation of system applications

Demonstrated ability to develop and document technical and functional specifications and analyze software and system processing flows.

Excellent technical and analytical aptitude

Good communication skills.

Excellent Project management skills.

Results-driven approach.

Mandatory Skills: Big Data, PySpark, Hive

Read more
This opening is with an MNC

This opening is with an MNC

Agency job
via LK Consultants by Namita Agate
Mumbai, Malad, andheri
8 - 13 yrs
₹13L - ₹22L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+8 more

• Minimum of 8 years of experience, of which 4 years should be applied data mining experience in disciplines such as call centre metrics.

• Strong experience in advanced statistics and analytics including segmentation, modelling, regression, forecasting, etc.

• Experience with leading and managing large teams.

• Demonstrated pattern of success in using advanced quantitative analytic methods to solve business problems.

• Demonstrated experience with Business Intelligence/Data Mining tools to work with data, investigate anomalies, construct data sets, and build models.

• Critical to share details on projects undertaken (preferably in the telecom industry), specifically through analysis from CRM.

Read more
Aprajita Consultancy

Aprajita Consultancy

Agency job
via Squarcell Resource India Pvt by Pranjali Reddy
Hyderabad
8 - 10 yrs
₹13L - ₹15L / yr
SQL server
Oracle
Cassandra
Terraform
Shell Scripting
+3 more

Role: Oracle DBA Developer


Location: Hyderabad


Required Experience: 8 + Years


Skills : DBA, Terraform, Ansible, Python, Shell Script, DevOps activities, Oracle DBA, SQL server, Cassandra, Oracle sql/plsql, MySQL/Oracle/MSSql/Mongo/Cassandra, Security measure configuration




Roles and Responsibilities:


 


1. 8+ years of hands-on DBA experience in one or many of the following: SQL Server, Oracle, Cassandra


2. DBA experience in an SRE environment will be an advantage.


3. Experience in automation/building databases by providing self-service tools; analyze and implement solutions for database administration (e.g., backups, performance tuning, troubleshooting, capacity planning).


4. Analyze solutions and implement best practices for cloud database and their components.


5. Build and enhance tooling, automation, and CI/CD workflows (Jenkins etc.) that provide safe self-service capabilities to the team.

6. Implement proactive monitoring and alerting to detect issues before they impact users. Use a metrics-driven approach to identify and root-cause performance and scalability bottlenecks in the system.


7. Work on automation of database infrastructure and help engineering succeed by providing self-service tools.


8. Write database documentation, including data standards, procedures, and definitions for the data dictionary (metadata)


9. Monitor database performance, control access permissions and privileges, capacity planning, implement changes and apply new patches and versions when required.


10. Recommend query and schema changes to optimize the performance of database queries.


11. Have experience with cloud-based environments (OCI, AWS, Azure) as well as On-Premises.


12. Have experience with cloud databases such as SQL Server, Oracle, Cassandra.


13. Have experience with infrastructure automation and configuration management (Jira, Confluence, Ansible, Gitlab, Terraform)


14. Have excellent written and verbal English communication skills.


15. Planning, managing, and scaling of data stores to ensure a business’ complex data requirements are met and it can easily access its data in a fast, reliable, and safe manner.


16. Ensures the quality of orchestration and integration of tools needed to support daily operations by patching together existing infrastructure with cloud solutions and additional data infrastructures.


17. Data Security and protecting the data through rigorous testing of backup and recovery processes and frequently auditing well-regulated security procedures.


18. Use software and tooling to automate manual tasks and enable engineers to move fast without the concern of losing data during their experiments.


19. Define service level objectives (SLOs) and perform risk analysis to determine which problems to address and which problems to automate.


20. Bachelor's Degree in a technical discipline required.


21. DBA certifications required: Oracle, SQL Server, Cassandra (2 or more).


22. Cloud and DevOps certifications will be an advantage.


 


Must have Skills:


 


• Oracle DBA with development

• SQL

• DevOps tools

• Cassandra






Read more
6sense

at 6sense

15 recruiters
Romesh Rawat
Posted by Romesh Rawat
Remote only
9 - 15 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

About Us:

6sense is a Predictive Intelligence Engine that is reimagining how B2B companies do sales and marketing. It works with big data at scale, advanced machine learning and predictive modelling to find buyers and predict what they will purchase, when and how much.

6sense helps B2B marketing and sales organizations fully understand the complex ABM buyer journey. By combining intent signals from every channel with the industry’s most advanced AI predictive capabilities, it is finally possible to predict account demand and optimize demand generation in an ABM world. Equipped with the power of AI and the 6sense Demand Platform™, marketing and sales professionals can uncover, prioritize, and engage buyers to drive more revenue.

6sense is seeking a Staff Software Engineer, Data, to become part of a team designing, developing, and deploying its customer-centric applications.

We’ve more than doubled our revenue in the past five years and completed our Series E funding of $200M last year, giving us a stable foundation for growth.


Responsibilities:

1. Own critical datasets and data pipelines for product & business, and work towards direct business goals of increased data coverage, data match rates, data quality, and data freshness

2. Create more value from various datasets with creative solutions, unlocking more value from existing data, and help build a data moat for the company

3. Design, develop, test, deploy and maintain optimal data pipelines, and assemble large, complex data sets that meet functional and non-functional business requirements

4. Improve our current data pipelines, i.e. improve their performance and SLAs, remove redundancies, and figure out a way to test before v/s after roll-out

5. Identify, design, and implement process improvements in data flow across multiple stages and via collaboration with multiple cross-functional teams, e.g. automating manual processes, optimising data delivery, hand-off processes, etc.

6. Work with cross-functional stakeholders including the Product, Data Analytics, and Customer Support teams for their enablement for data access and related goals

7. Build for security, privacy, scalability, reliability and compliance

8. Mentor and coach other team members on scalable and extensible solutions design, and best coding standards

9. Help build a team and cultivate innovation by driving cross-collaboration and execution of projects across multiple teams

Requirements:

• 8-10+ years of overall work experience as a Data Engineer

• Excellent analytical and problem-solving skills

• Strong experience with Big Data technologies like Apache Spark. Experience with Hadoop, Hive, Presto would be a plus

• Strong experience in writing complex, optimized SQL queries across large data sets. Experience with optimizing queries and underlying storage

• Experience with Python/Scala

• Experience with Apache Airflow or other orchestration tools

• Experience with writing Hive/Presto UDFs in Java

• Experience working on the AWS cloud platform and its services

• Experience with key-value stores or NoSQL databases would be a plus

• Comfortable with the Unix/Linux command line
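Orchestration tools such as Airflow, mentioned above, model a pipeline as a DAG and execute tasks in dependency order. A minimal sketch of that idea with Python's stdlib graphlib (the task names are hypothetical):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on, as an Airflow DAG would.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

# static_order() yields a valid execution order respecting all dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)
```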

Interpersonal Skills:

• You can work independently as well as part of a team.

• You take ownership of projects and drive them to conclusion.

• You’re a good communicator and are capable of not just doing the work, but also teaching others and explaining the “why” behind complicated technical decisions.

• You aren’t afraid to roll up your sleeves: this role will evolve over time, and we’ll want you to evolve with it.

Read more
Thoughtworks

at Thoughtworks

1 video
27 recruiters
Sunidhi Thakur
Posted by Sunidhi Thakur
Bengaluru (Bangalore)
10 - 13 yrs
Best in industry
Data modeling
PySpark
Data engineering
Big Data
Hadoop
+10 more

Lead Data Engineer

 

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

 

Job responsibilities

 

·      You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems

·      You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges

·      You will collaborate with Data Scientists in order to design scalable implementations of their models

·      You will pair to write clean and iterative code based on TDD

·      Leverage various continuous delivery practices to deploy, support and operate data pipelines

·      Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

·      Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

·      Create data models and speak to the tradeoffs of different modeling approaches

·      On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product

·      Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

·      Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes

 

Job qualifications

Technical skills

·      You are equally happy coding and leading a team to implement a solution

·      You have a track record of innovation and expertise in Data Engineering

·      You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations

·      You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop

·      You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

·      Hands-on experience with MapR, Cloudera, Hortonworks and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)

·      You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

·      You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

·      Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems

 

Professional skills


·      Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers

·      You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

·      An interest in coaching others, sharing your experience and knowledge with teammates

·      You enjoy influencing others and always advocate for technical excellence while being open to change when needed

Read more
cambodia
2 - 5 yrs
₹3L - ₹6L / yr
Search Engine Optimization (SEO)
MS-Outlook
Copy Writing
Content Writing
Artificial Intelligence (AI)
+6 more

● Work in a Business Process Outsourcing (BPO) Module providing marketing solutions to different local and international clients and Business Development Units.

● Content Production: Creating content to support demand generation initiatives, and grow brand awareness in a competitive category of an online casino company.

● Writing content for different marketing channels (such as website, blogs, thought leadership pieces, social media, podcasts, webinar, etc.) as assigned to effectively reach the desired target players and marketing goals.

● Data Analysis: Analyze player data to identify trends, improve the player experience, and make data-driven decisions for the casino's operations.

● Research and use AI-based tools to improve and speed up content creation processes.

● Researching content and consumer trends to ensure that content is relevant and appealing.

● Help develop and participate in market research for the purposes of thought leadership content production and opportunities, and competitive intelligence for content marketing.

● Security: Maintain a secure online environment, including protecting player data and preventing cyberattacks.

● Managing content calendars (and supporting calendar management) and ensuring the content you write is consistent with brand standards and meets the brief as-assigned.

● Coordinating with project manager / content manager to ensure the timely delivery of assignments.

● Keeping up to date with content trends, consumer preferences, and advancements in technology.

● Reporting: Generate regular reports on key performance indicators, financial metrics, and operational data to assess the casino's performance.


The specific responsibilities and requirements for a Marketing Content Supervisor/Manager in an online casino may vary depending on the size and nature of the casino, as well as local regulations and industry standards. 


Salary

Php 80,000 - Php 100,000

INR 117,587 - 146,960


Work Experience Requirements


Essential Qualifications


● Excellent research, writing, editing, proofreading, content creation and communication skills.

● Proficiency/experience in formulating corporate/brand/product messaging.

● Strong understanding of SEO and content practices.

● Proficiency in MS Office, Zoom, Slack, marketing platforms related to creative content creation/ project management/ workflow.

● Content writing / copywriting portfolio demonstrating scope of content/copy writing capabilities and application of writing and SEO best practices.

● Highly motivated, self-starter, able to prioritize projects, accept responsibility and follow through without close supervision on every step.

● Demonstrated strong analytical skills with an action-oriented mindset focused on data-driven results.

● Experience in AI-based content creation tools is a plus. Openness to research and use AI tools required.

● Passion for learning and self-improvement.

● Detail-oriented team player with a positive attitude.

● Ability to embrace change and love working in dynamic, growing environments.

● Experience with research, content production, writing on-brand and turning thought pieces into multiple content assets by simplifying complex concepts preferred.

● Ability to keep abreast of content trends and advancements in content strategies and technologies.

● On-camera or on-mic experience or desire to speak and present preferred.

● Must be willing to report onsite in Cambodia

Staffbee Solutions INC
Remote only
6 - 10 yrs
₹1L - ₹1.5L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+11 more

Looking for freelance work?

We are seeking a freelance Data Engineer with 7+ years of experience

 

Skills Required: Deep knowledge of any cloud (AWS, Azure, Google Cloud), Databricks, data lakes, data warehousing, Python/Scala, SQL, BI, and other analytics systems

 

What we are looking for

We are seeking an experienced Senior Data Engineer with experience in architecture, design, and development of highly scalable data integration and data engineering processes

 

  • The Senior Consultant must have a strong understanding and experience with data & analytics solution architecture, including data warehousing, data lakes, ETL/ELT workload patterns, and related BI & analytics systems
  • Strong in scripting languages like Python, Scala
  • 5+ years of hands-on experience with one or more data integration/ETL tools.
  • Experience building on-prem data warehousing solutions.
  • Experience with designing and developing ETLs, Data Marts, Star Schema
  • Designing a data warehouse solution using Synapse or Azure SQL DB
  • Experience building pipelines using Synapse or Azure Data Factory to ingest data from various sources
  • Understanding of integration run times available in Azure.
  • Advanced working SQL knowledge and experience with relational databases, including query authoring (SQL), as well as working familiarity with a variety of databases
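To illustrate the Data Marts / Star Schema bullet above, here is a minimal sketch of a star-schema layout and a typical fact-to-dimension query. SQLite stands in for the warehouse, and all table and column names are invented for illustration:

```python
import sqlite3

# In-memory database as a stand-in for an enterprise warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, full_date TEXT, year INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity INTEGER,
    amount REAL
);
""")
conn.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01', 2024)")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.execute("INSERT INTO fact_sales VALUES (20240101, 1, 3, 29.97)")

# A typical star-schema query: join the fact table to its dimensions and aggregate.
row = conn.execute("""
    SELECT d.year, p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.year, p.category
""").fetchone()
print(row)  # (2024, 'Hardware', 29.97)
```

The same shape carries over to Synapse or Azure SQL DB: a wide fact table keyed into narrow dimension tables, queried by joining and grouping.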


Impetus technologies


Agency job
via HR BIZ HUB by Pooja shankla
Anywhere in India
10 - 12 yrs
₹3L - ₹15L / yr
skill iconVue.js
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconJavascript
+11 more

Experience:


Should have a minimum of 10-12 years of experience.

Should have Product Development/Maintenance/Production Support experience in a support organization

Should have a good understanding of the services business for Fortune 1000 companies from an operations point of view

Ability to read, understand and communicate complex technical information

Ability to express ideas in an organized, articulate and concise manner

Ability to face stressful situations with a positive attitude

Any certification related to support services will be an added advantage

 


Education: BE, B.Tech (CS), MCA

Location: India

Primary Skills:

 

Hands-on experience with the OpenStack framework. Ability to set up a private cloud using an OpenStack environment. Awareness of various OpenStack services and modules

Strong experience with OpenStack services like Neutron, Cinder, Keystone, etc.

Proficiency in programming languages such as Python, Ruby, or Go.

Strong knowledge of Linux systems administration and networking.

Familiarity with virtualization technologies like KVM or VMware.

Experience with configuration management and IaC tools like Ansible, Terraform.

Subject matter expertise in OpenStack security

Solid experience with Linux and shell scripting

Sound knowledge of cloud computing concepts & technologies, such as Docker, Kubernetes, AWS, GCP, Azure, etc.

Ability to configure an OpenStack environment for optimum resource usage

Good knowledge of security and operations in an OpenStack environment

Strong knowledge of Linux internals, networking, storage, security

Strong knowledge of VMware Enterprise products (ESX, vCenter)

Hands on experience with HEAT orchestration

Experience with CI/CD, monitoring, operational aspects

Strong experience working with REST APIs, JSON

Exposure to Big Data technologies (messaging queues, Hadoop/MPP, NoSQL databases)

Hands on experience with open source monitoring tools like Grafana/Prometheus/Nagios/Ganglia/Zabbix etc.

Strong verbal and written communication skills are mandatory

Excellent analytical and problem-solving skills are mandatory

 

Role & Responsibilities


Advise customers and colleagues on cloud and virtualization topics

Work with the architecture team on cloud design projects using OpenStack

Collaborate with product, customer success, and presales on customer projects

Participate in onsite assessments and workshops when requested 

Provide subject matter expertise and mentor colleagues

Set up OpenStack environments for projects

Design, deploy, and maintain OpenStack infrastructure.

Collaborate with cross-functional chapters to integrate OpenStack with other services (k8s, DBaaS)

Develop automation scripts and tools to streamline OpenStack operations.

Troubleshoot and resolve issues related to OpenStack services.

Monitor and optimize the performance and scalability of OpenStack components.

Stay updated with the latest OpenStack releases and contribute to the OpenStack community.

Work closely with Architects and Product Management to understand requirements

Should be capable of working independently and responsible for end-to-end implementation

Should work with complete ownership and handle all issues without missing SLAs

Work closely with engineering team and support team

Should be able to debug the issues and report appropriately in the ticketing system

Contribute to improving the efficiency of the assignment through quality improvements and innovative suggestions

Should be able to debug/create scripts for automation

Should be able to configure monitoring utilities & set up alerts

Should be hands on in setting up OS, applications, databases and have passion to learn new technologies

Should be able to scan logs, errors and exceptions and get to the root cause of the issue

Contribute to developing a knowledge base in collaboration with other team members

Maintain customer loyalty through integrity and accountability

Groom and mentor team members on project technologies and work

EMAlpha
Sash Sarangi
Posted by Sash Sarangi
Remote only
2 - 5 yrs
₹6L - ₹12L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
Javascript
+19 more

Required: a full-stack Senior SDE with a focus on backend microservices / modular monoliths, with 3-4+ years of experience in the following:

  • Bachelor’s or Master’s degree in Computer Science or equivalent industry technical skills
  • Mandatory In-depth knowledge and strong experience in Python programming language.
  • Expertise and significant work experience in Python with FastAPI and async frameworks. 
  • Prior experience building Microservice and/or modular monolith.
  • Should be an expert in Object-Oriented Programming and Design Patterns.
  • Has knowledge and experience with SQLAlchemy/ORM, Celery, Flower, etc.
  • Has knowledge and experience with Kafka / RabbitMQ, Redis.
  • Experience in Postgres/ Cockroachdb.
  • Experience in MongoDB/DynamoDB and/or Cassandra are added advantages.
  • Strong experience in AWS services (e.g., EC2, ECS, Lambda, Step Functions, S3, SQS, Cognito) and/or equivalent Azure services preferred.
  • Experience working with Docker required.
  • Experience in socket.io added advantage
  • Experience with CI/CD e.g. git actions preferred. 
  • Experience with version control tools such as Git.
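As a rough sketch of the async style the stack above relies on (stdlib asyncio only; the FastAPI layer is omitted and the service names here are invented):

```python
import asyncio

async def fetch(name, delay):
    """Simulate a non-blocking I/O call (e.g. a DB or downstream service)."""
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main():
    # Concurrent awaits are the core benefit of an async service stack:
    # three 0.01s calls complete in roughly 0.01s total, not 0.03s.
    return await asyncio.gather(
        fetch("users", 0.01), fetch("orders", 0.01), fetch("billing", 0.01)
    )

results = asyncio.run(main())
print(results)  # ['users:done', 'orders:done', 'billing:done']
```

In a FastAPI service the same pattern appears inside `async def` endpoints, where awaited calls to the database or message broker free the event loop for other requests.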


This is one of the early positions for scaling up the Technology team. So culture-fit is really important.

  • The role will require serious commitment, and someone with a mindset similar to the team's would be a good fit. It's going to be a tremendous growth opportunity. There will be challenging tasks. A lot of these tasks would involve working closely with our AI & Data Science Team.
  • We are looking for someone who has considerable expertise and experience on a low latency highly scaled backend / fullstack engineering stack. The role is ideal for someone who's willing to take such challenges.
  • Coding Expectation – 70-80% of time.
  • Has worked with enterprise solution company / client or, worked with growth/scaled startup earlier.
  • Skills to work effectively in a distributed and remote team environment.
Quadratic Insights
Praveen Kondaveeti
Posted by Praveen Kondaveeti
Hyderabad
7 - 10 yrs
₹15L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

About Quadratyx:

We are a global product-centric insight & automation services company. We help the world’s organizations make better & faster decisions using the power of insight & intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing and Advanced Analytics. Quadratyx can boast more extensive experience in data sciences & analytics than most other companies in India.

We firmly believe in Excellence Everywhere.


Job Description

Purpose of the Job/ Role:

• As a Technical Lead, your work is a combination of hands-on contribution, customer engagement and technical team management. Overall, you’ll design, architect, deploy and maintain big data solutions.


Key Requisites:

• Expertise in Data structures and algorithms.

• Technical management across the full life cycle of big data (Hadoop) projects from requirement gathering and analysis to platform selection, design of the architecture and deployment.

• Scaling of cloud-based infrastructure.

• Collaborating with business consultants, data scientists, engineers and developers to develop data solutions.

• Experience leading and mentoring a team of data engineers.

• Hands-on experience in test-driven development (TDD).

• Expertise in NoSQL databases like MongoDB, Cassandra, etc. (MongoDB preferred) and strong knowledge of relational databases.

• Good knowledge of Kafka and Spark Streaming internal architecture.

• Good knowledge of any Application Servers.

• Extensive knowledge of big data platforms like Hadoop, Hortonworks, etc.

• Knowledge of data ingestion and integration on cloud services such as AWS, Google Cloud, Azure, etc. 


Skills/ Competencies Required

Technical Skills

• Strong expertise (9 or more out of 10) in at least one modern programming language, like Python, or Java.

• Clear end-to-end experience in designing, programming, and implementing large software systems.

• Passion and analytical abilities to solve complex problems.

Soft Skills

• Always speaking your mind freely.

• Communicating ideas clearly in talking and writing, integrity to never copy or plagiarize intellectual property of others.

• Exercising discretion and independent judgment where needed in performing duties; not needing micro-management, maintaining high professional standards.


Academic Qualifications & Experience Required

Required Educational Qualification & Relevant Experience

• Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute.

• Minimum 7 - 10 years of work experience as a developer in an IT organization (preferably with an Analytics / Big Data / Data Science / AI background).

Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Sukhdeep Singh
Posted by Sukhdeep Singh
Chennai
4 - 7 yrs
₹13L - ₹15L / yr
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
+10 more

Title: Platform Engineer
Location: Chennai
Work Mode: Hybrid (Remote and Chennai Office)
Experience: 4+ years
Budget: 16 - 18 LPA

Responsibilities:

  • Parse data using Python, create dashboards in Tableau.
  • Utilize Jenkins for Airflow pipeline creation and CI/CD maintenance.
  • Migrate Datastage jobs to Snowflake, optimize performance.
  • Work with HDFS, Hive, Kafka, and basic Spark.
  • Develop Python scripts for data parsing, quality checks, and visualization.
  • Conduct unit testing and web application testing.
  • Implement Apache Airflow and handle production migration.
  • Apply data warehousing techniques for data cleansing and dimension modeling.
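A minimal sketch of the kind of parsing and data-quality script the responsibilities above describe (the feed format and validation rule are invented for illustration; a real script would read from HDFS or a file drop and report into an Airflow task):

```python
import csv
import io

# Hypothetical raw feed; stands in for an extract pulled from HDFS/Kafka.
raw = "id,amount\n1,10.5\n2,not_a_number\n3,7.25\n"

def quality_check(text):
    """Parse a CSV feed and split rows into valid and rejected records."""
    valid, rejected = [], []
    for row in csv.DictReader(io.StringIO(text)):
        try:
            row["amount"] = float(row["amount"])  # type check doubles as validation
            valid.append(row)
        except ValueError:
            rejected.append(row)
    return valid, rejected

valid, rejected = quality_check(raw)
print(len(valid), len(rejected))  # 2 1
```

Rejected rows would typically be written to a quarantine table and surfaced on a Tableau dashboard rather than silently dropped.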

Requirements:

  • 4+ years of experience as a Platform Engineer.
  • Strong Python skills, knowledge of Tableau.
  • Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
  • Proficient in Unix Shell Scripting and SQL.
  • Familiarity with ETL tools like DataStage and DMExpress.
  • Understanding of Apache Airflow.
  • Strong problem-solving and communication skills.

Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.

Gipfel & Schnell Consultings Pvt Ltd
TanmayaKumar Pattanaik
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
₹9L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

Qualifications & Experience:


▪ 2 - 4 years overall experience in ETLs, data pipelines, Data Warehouse development and database design

▪ Software solution development using Hadoop Technologies such as MapReduce, Hive, Spark, Kafka, Yarn/Mesos etc.

▪ Expert in SQL, worked on advanced SQL for at least 2+ years

▪ Good development skills in Java, Python or other languages

▪ Experience with EMR, S3

▪ Knowledge and exposure to BI applications, e.g. Tableau, Qlikview

▪ Comfortable working in an agile environment

iLink Systems

at iLink Systems

1 video
1 recruiter
Ganesh Sooriyamoorthu
Posted by Ganesh Sooriyamoorthu
Chennai, Pune, Noida, Bengaluru (Bangalore)
5 - 15 yrs
₹10L - ₹15L / yr
Apache Kafka
Big Data
Java
Spark
Hadoop
+1 more
  • KSQL
  • Data Engineering spectrum (Java/Spark)
  • Spark Scala / Kafka Streaming
  • Confluent Kafka components
  • Basic understanding of Hadoop


Product Base Company into Logistic


Agency job
via Qrata by Rayal Rajan
Mumbai, Navi Mumbai
6 - 14 yrs
₹16L - ₹37L / yr
Python
PySpark
Data engineering
Big Data
Hadoop
+3 more

Role: Principal Software Engineer


We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. This role will require managing, processing and analyzing large amounts of raw information in scalable databases. It will also involve developing unique data structures and writing algorithms for an entirely new set of products. The candidate will be required to have critical thinking and problem-solving skills. Candidates must have software development experience with advanced algorithms and must be able to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus. The candidate should have some exposure to cloud environments, continuous integration and agile scrum processes.



Responsibilities:


• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule

• Software Development that creates data driven intelligence in the products which deals with Big Data backends

• Exploratory analysis of the data to be able to come up with efficient data structures and algorithms for given requirements

• The system may or may not involve machine learning models and pipelines but will require advanced algorithm development

• Managing data in large-scale data stores (such as NoSQL DBs, time series DBs, geospatial DBs, etc.)

• Creating metrics and evaluating algorithms for better accuracy and recall

• Ensuring efficient access and usage of data through the means of indexing, clustering etc.

• Collaborate with engineering and product development teams.
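The metrics bullet above (accuracy and recall) boils down to set overlap between predicted and actual items. A minimal sketch, with toy data invented for illustration:

```python
def precision_recall(predicted, actual):
    """Compute precision and recall for a retrieval/classification result."""
    predicted, actual = set(predicted), set(actual)
    true_pos = len(predicted & actual)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(actual) if actual else 0.0
    return precision, recall

# Toy example: the algorithm flags items 1-3; items 2-5 are truly relevant.
p, r = precision_recall(predicted=[1, 2, 3], actual=[2, 3, 4, 5])
print(round(p, 3), r)  # 0.667 0.5
```

In practice these numbers are tracked per model version so that changes to the data structures or algorithms can be compared objectively.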


Requirements:


• Master’s or Bachelor’s degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or related field from top-tier school

• OR Master’s degree or higher in Statistics or Mathematics, with a hands-on background in software development.

• Experience of 8 to 10 years with product development, having done algorithmic work

• 5+ years of experience working with large data sets or doing large-scale quantitative analysis

• Understanding of SaaS based products and services.

• Strong algorithmic problem-solving skills

• Able to mentor and manage team and take responsibilities of team deadline.


Skill set required:


• In-depth knowledge of the Python programming language

• Understanding of software architecture and software design

• Must have fully managed a project with a team

• Having worked with Agile project management practices

• Experience with data processing analytics and visualization tools in Python (such as pandas, matplotlib, Scipy, etc.)

• Strong understanding of SQL and querying NoSQL databases (e.g. MongoDB, Cassandra, Redis)

Gipfel & Schnell Consultings Pvt Ltd
Aravind Kumar
Posted by Aravind Kumar
Bengaluru (Bangalore)
3 - 8 yrs
Best in industry
Software Testing (QA)
Test Automation (QA)
Appium
Selenium
Java
+11 more

Minimum 4 to 10 years of experience in testing distributed backend software architectures/systems.

• 4+ years of work experience in test planning and automation of enterprise software

• Expertise in programming using Java or Python and other scripting languages.

• Experience with one or more public clouds is expected.

• Comfortable with build processes, CI processes, and managing QA environments, as well as working with build management tools like Git and Jenkins.

• Experience with performance and scalability testing tools.

• Good working knowledge of relational databases, logging, and monitoring frameworks is expected.

Familiarity with system flows and how components interact with an application, e.g. Elasticsearch, Mongo, Kafka, Hive, Redis, AWS

Vithamas Technologies Pvt LTD
Mysore
4 - 6 yrs
₹10L - ₹20L / yr
Data modeling
ETL
Oracle
MS SQLServer
MongoDB
+4 more

RequiredSkills:


• Minimum of 4-6 years of experience in data modeling (including conceptual, logical and physical data models).

• 2-3 years of experience in Extraction, Transformation and Loading (ETL) work using data migration tools like Talend, Informatica, Datastage, etc.

• 4-6 years of experience as a database developer in Oracle, MS SQL or another enterprise database, with a focus on building data integration processes.

• Candidate should have exposure to a NoSQL technology, preferably MongoDB.

• Experience in processing large data volumes, indicated by experience with Big Data platforms (Teradata, Netezza, Vertica or Cloudera, Hortonworks, SAP HANA, Cassandra, etc.).

• Understanding of data warehousing concepts and decision support systems.


• Ability to deal with sensitive and confidential material and adhere to worldwide data security standards.

• Experience writing documentation for design and feature requirements.

• Experience developing data-intensive applications on cloud-based architectures and infrastructures such as AWS, Azure, etc.

• Excellent communication and collaboration skills.

Ajargh Kreation
Koramangala
3 - 6 yrs
₹6L - ₹10L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB
+7 more

KEY RESPONSIBILITIES

  • Building a website based on the given requirements and ensure it’s successfully deployed
  • Responsible for designing, planning, and testing new web pages and site features
  • A propensity for brainstorming and coming up with solutions to open-ended problems
  • Work closely with other teams, and project managers, to understand all stakeholders’ requirements and ensure that all specifications and requirements are met in final development
  • Troubleshoot and solve problems related to website functionality
  • Takes ownership of initiatives and drives them to completion.
  • Desire to learn and dive deep into new technologies on the job, especially around modern data storage and streaming open source systems
  • Responsible for creating, optimizing, and managing REST APIs
  • Create website content and enhance website usability and visibility
  • Ensure cross-browser compatibility and testing for mobile responsiveness
  • Ability to integrate payment processing and search functionality software solutions
  • Stay up-to-date with technological advancements and the latest coding practices
  • Collaborate with the team of designers, content managers, and developers to determine site goals, functionality, and layout
  • Monitor website traffic and overall system health with Google Analytics to ensure a high GTmetrix score
  • Build the front-end of applications through appealing visual design
  • Design client-side and server-side architecture
  • Develop server-side logic and APIs that integrate with front-end applications.
  • Architect and design complex database structures and data models.
  • Develop and implement backend systems to support scalable and high-performance web applications.
  • Create automated tests to ensure system stability and performance.
  • Ensure security and data privacy measures are maintained throughout the development process.
  • Maintain an up-to-date changelog for all new, updated, and fixed changes.
  • Ability to document and manage all the software design, requirements, reusable & transferable code, and other technical aspects of the project.
  • Create and convert storyboards and wireframes into high-quality full-stack code
  • Write, execute, and maintain clean, reusable, and scalable code
  • Design and implement low-latency, high-availability, and performant applications
  • Implement security and data protection
  • Ensure code that is platform and device-agnostic

EDUCATION & SKILLS REQUIREMENT

  • B.Tech. / BE / MS degree in Computer Science or Information Technology
  • Expertise in MERN stack (MongoDB, Express.js, React.js, Node.js)
  • Should have prior working experience of at least 3 years as web developer or full stack developer
  • Should have done projects in e-commerce or have preferably worked with companies operating in e-commerce
  • Should have expert-level knowledge in implementing frontend technologies
  • Should have worked in creating backend and have deep understanding of frameworks
  • Experience in the complete product development life cycle
  • Hands-on experience with JavaScript, HTML, CSS, jQuery, JSON, PHP, XML
  • Proficiency in databases, relational and NoSQL alike (e.g., MySQL, MongoDB, PostgreSQL, DynamoDB, Redis, Hive, Elastic, etc.)
  • Knowledge of architecting or implementing search APIs
  • Great understanding of data modeling and RESTful APIs
  • Strong knowledge of CS fundamentals, data structures, algorithms, and design patterns
  • Strong analytical, consultative, and communication skills
  • Excellent understanding of Microsoft office tools : excel, word, powerpoint etc.
  • Excellent organizational and time management skills
  • Experience with responsive and adaptive design (Web, Mobile & App)
  • Should be a self-starter with the ability to work without being supervised
  • Excellent debugging and optimization skills
  • Experience building high throughput/low latency systems.
  • Knowledge of big data systems such as Cassandra, Elastic, Kafka, Kubernetes, and Docker
  • Should be willing to be a part of a small team and working in fast-paced environment
  • Should be highly passionate about building products that create a significant impact.
  • Should have experience in user experience design, website optimization techniques and different PIM tools


Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Sukhdeep Singh
Posted by Sukhdeep Singh
Gurugram
4 - 7 yrs
₹10L - ₹15L / yr
NodeJS (Node.js)
MongoDB
Mongoose
Express
Microservices
+12 more

Job description

  • Engage with the business team and stakeholder at different levels to understand business needs, analyze, document, prioritize the requirements, and make recommendations on the solution and implementation.
  • Delivering the product that meets business requirements, reliability, scalability, and performance goals
  • Work with Agile scrum team and create the scrum team strategy roadmap/backlog, develop minimal viable product and Agile user stories that drive a highly effective and efficient project development and delivery scrum team.
  • Work on Data mapping/transformation, solution design, process diagram, acceptance criteria, user acceptance testing and other project artifacts.
  • Work effectively with the technical/development team and help them understand the specifications/requirements for technical development, testing and implementation.
  • Ensure solutions promote simplicity, efficiency, and conform to enterprise and architecture standards and guidelines.
  • Partner with the support organization to provide training, support and technical assistance to operation team and end users as necessary
  • Product/Application Developer
  • Designs and develops software applications based on user requirements in a variety of coding environments such as graphical user interface, database query languages, report writers, and specific development languages
  • Consult on the use and implementation of software products and applications and specialize in the business development environment, including the selection of development tools and methodology

Primary / Mandatory skills:


  • Overall Experience: Overall 4 to 6 years of IT development experience
  • Design and Code NodeJS based Microservices, API Webservices, NoSql technologies (Cassandra/MongoDb)
  • Expert in developing code for Node-JS based Microservice in TypeScript
  • Good experience in understanding data transmission through pub/sub mechanisms like Event Hub and Kafka
  • Good understanding of analytics and clickstream data capture is a HUGE plus
  • Good understanding of frameworks like Java Spring Boot, Python is preferred
  • Good understanding of Microsoft Azure principles and services is preferred
  • Able to write Unit test cases
  • Familiarity with performance testing tools such as Akamai SOASTA is preferred
  • Good knowledge of source code control tools like Git, Code Cloud, etc. and understanding of CI/CD (Jenkins and Kubernetes)
  • Solid technical background with understanding and/or experience in software development and web technologies
  • Strong analytical skills and the ability to convert consumer insights and performance data into high impact initiatives
  • Experience working within scaled agile development team
  • Excellent written and verbal communication skills with demonstrated ability to present complex technical information in a clear manner to peers, developers, and senior leaders
  • The desire to be continually learning about emerging technologies/industry trends


StashAway
Joshua YAP
Posted by Joshua YAP
Remote only
3 - 6 yrs
S$3K - S$9K / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
EKS
+3 more

We are looking for a DevOps Engineer (individual contributor) to maintain and build upon our next-generation infrastructure. We aim to keep our systems secure, reliable and high-performing by constantly striving for best-in-class infrastructure and security:


  • Leveraging a variety of tools to ensure all configuration is codified (using tools like Terraform and Flux) and applied in a secure, repeatable way (via CI)
  • Routinely identifying new technologies and processes that enable us to streamline our operations and improve overall security
  • Holistically monitoring our overall DevOps setup and health to ensure our roadmap constantly delivers high-impact improvements
  • Eliminating toil by automating as many operational aspects of our day-to-day work as possible using internally created, third party and/or open-source tools
  • Maintain a culture of empowerment and self-service by minimizing friction for developers to understand and use our infrastructure through a combination of innovative tools, excellent documentation and teamwork


Tech stack: Microservices primarily written in JavaScript, Kotlin, Scala, and Python. The majority of our infrastructure sits within EKS on AWS, using Istio. We use Terraform and Helm/Flux when working with AWS and EKS (k8s). Deployments are managed with a combination of Jenkins and Flux. We rely heavily on Kafka, Cassandra, Mongo and Postgres and are increasingly leveraging AWS-managed services (e.g. RDS, lambda).



Kloud9 Technologies
Bengaluru (Bangalore)
3 - 6 yrs
₹5L - ₹20L / yr
Amazon Web Services (AWS)
Amazon EMR
EMR
Spark
PySpark
+9 more

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between E-commerce and cloud. E-commerce in any industry is limiting and poses a huge challenge in terms of the finances spent on physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform and technology independent. Our vendor independence not just provides us with a unique perspective into the cloud market but also ensures that we deliver the cloud solutions available that best meet our clients' requirements.


What we are looking for:

● 3+ years’ experience developing Data & Analytic solutions

● Experience building data lake solutions leveraging one or more of the following: AWS, EMR, S3, Hive & Spark

● Experience with relational SQL

● Experience with scripting languages such as Shell, Python

● Experience with source control tools such as GitHub and related dev process

● Experience with workflow scheduling tools such as Airflow

● In-depth knowledge of scalable cloud

● Has a passion for data solutions

● Strong understanding of data structures and algorithms

● Strong understanding of solution and technical design

● Has a strong problem-solving and analytical mindset

● Experience working with Agile Teams.

● Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders

● Able to quickly pick up new programming languages, technologies, and frameworks

● Bachelor’s Degree in computer science
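The relational SQL and Python scripting skills the list above asks for can be sketched together with Python's built-in sqlite3 module. The table, columns and figures below are invented for illustration and are not from any real Kloud9 system:

```python
import sqlite3

# Hypothetical example: a tiny relational-SQL-plus-Python script of the
# kind the requirements above describe.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 120.0), ("south", 80.0), ("north", 50.0)],
)

# Aggregate per region - the bread and butter of analytic SQL.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 170.0), ('south', 80.0)]
```

The same pattern scales up directly: swap the in-memory SQLite connection for a warehouse driver and the query stays recognisably the same.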


Why Explore a Career at Kloud9:

 

With job opportunities in prime locations in the US, London, Poland and Bengaluru, we help build your career path in cutting-edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

TensorGo Software Private Limited
Deepika Agarwal
Posted by Deepika Agarwal
Remote only
5 - 8 yrs
₹5L - ₹15L / yr
Python
PySpark
Apache Airflow
Spark
Hadoop
+4 more

Requirements:

● Understanding our data sets and how to bring them together.

● Working with our engineering team to support custom solutions offered for product development.

● Filling the gap between development, engineering and data ops.

● Creating, maintaining and documenting scripts to support ongoing custom solutions.

● Excellent organizational skills, including attention to precise details

● Strong multitasking skills and ability to work in a fast-paced environment

● 5+ years of experience developing scripts with Python.

● Know your way around RESTful APIs (able to integrate with them; publishing is not necessary).

● You are familiar with pulling and pushing files over SFTP and to/from AWS S3.

● Experience with any Cloud solutions including GCP / AWS / OCI / Azure.

● Familiarity with SQL programming to query and transform data from relational Databases.

● Familiarity with Linux (and the Linux work environment).

● Excellent written and verbal communication skills

● Extracting, transforming, and loading data into internal databases and Hadoop

● Optimizing our new and existing data pipelines for speed and reliability

● Deploying product build and product improvements

● Documenting and managing multiple repositories of code

● Experience with SQL and NoSQL databases (Cassandra, MySQL)

● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, RedShift, Athena)

● Hands-on experience in Airflow

● Understanding of best practices and common coding patterns around storing, partitioning, warehousing and indexing of data

● Experience reading data from Kafka topics (both live streams and offline)

● Experience with PySpark and DataFrames
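The "storing, partitioning, warehousing and indexing" practices mentioned above usually start with date-based partitioning, i.e. the Hive/Spark-style `dt=YYYY-MM-DD` directory convention. The sketch below illustrates that layout with plain Python; the record fields and bucket path are hypothetical:

```python
from collections import defaultdict

def partition_paths(records, root="s3://bucket/events"):
    """Group records into Hive-style partition directories by event date."""
    parts = defaultdict(list)
    for rec in records:
        # One directory per day: .../dt=2023-01-01, .../dt=2023-01-02, ...
        parts[f"{root}/dt={rec['event_date']}"].append(rec)
    return dict(parts)

records = [
    {"event_date": "2023-01-01", "user": "a"},
    {"event_date": "2023-01-02", "user": "b"},
    {"event_date": "2023-01-01", "user": "c"},
]
layout = partition_paths(records)
print(sorted(layout))
# ['s3://bucket/events/dt=2023-01-01', 's3://bucket/events/dt=2023-01-02']
```

Partitioning this way lets downstream query engines prune whole directories by date predicate instead of scanning every file.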

Responsibilities:

You will be:

● Collaborating across an agile team to continuously design, iterate, and develop big data systems.

● Extracting, transforming, and loading data into internal databases.

● Optimizing our new and existing data pipelines for speed and reliability.

● Deploying new products and product improvements.

● Documenting and managing multiple repositories of code.

LiftOff Software India

at LiftOff Software India

2 recruiters
Hameeda Haider
Posted by Hameeda Haider
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹1L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Why LiftOff? 

 

We at LiftOff specialize in product creation; our main forte lies in helping entrepreneurs realize their dreams. We have helped businesses and entrepreneurs launch more than 70 products.

Many on the team are serial entrepreneurs with a history of successful exits.

 

As a Data Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.

 

About the Role

 

If you’re driven by the passion to build something great from scratch, a desire to innovate, and a commitment to achieve excellence in your craft, LiftOff is a great place for you.


  • Architecture/design / configure the data ingestion pipeline for data received from 3rd party vendors
  • Data loading should be configured with ease/flexibility for adding new data sources & also refresh of the previously loaded data
  • Design & implement a consumer graph, that provides an efficient means to query the data via email, phone, and address information (using any one of the fields or combination)
  • Expose the consumer graph/search capability for consumption by our middleware APIs, which would be shown in the portal
  • Design / review the current client-specific data storage, which is kept as a copy of the consumer master data for easier retrieval/query for subsequent usage


Please note that this is a Consultant role.

Candidates who are okay with freelancing/part-time work can apply.

Virtusa

at Virtusa

2 recruiters
Priyanka Sathiyamoorthi
Posted by Priyanka Sathiyamoorthi
Chennai
11 - 15 yrs
₹15L - ₹33L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more

We are looking for a Big Data Engineer with Java for our Chennai location.

Location : Chennai 

Exp : 11 to 15 Years 



Job description

Required Skill:

1. Candidate should have a minimum of 7 years of total experience

2. Candidate should have minimum 4 years of experience in Big Data design and development

3. Candidate should have experience in Java, Spark, Hive & Hadoop, Python 

4. Candidate should have experience in any RDBMS.

Roles & Responsibility:

1. To create work plans, and monitor and track the work schedule for on-time delivery as per the defined quality standards.

2. To develop and guide the team members in enhancing their technical capabilities and increasing productivity.

3. To ensure process improvement and compliance in the assigned module, and participate in technical discussions or review.

4. To prepare and submit status reports for minimizing exposure and risks on the project or closure of escalation


Regards,

Priyanka S

7P8R9I9Y4A0N8K8A7S7

codersbrain

at codersbrain

1 recruiter
Aishwarya Hire
Posted by Aishwarya Hire
Bengaluru (Bangalore)
4 - 6 yrs
₹8L - ₹10L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more
  • Design the architecture of our big data platform
  • Perform and oversee tasks such as writing scripts, calling APIs, web scraping, and writing SQL queries
  • Design and implement data stores that support the scalable processing and storage of our high-frequency data
  • Maintain our data pipeline
  • Customize and oversee integration tools, warehouses, databases, and analytical systems
  • Configure and provide availability for data-access tools used by all data scientists
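The web-scraping task in the list above can be sketched with the standard library alone; in practice a dedicated parser such as BeautifulSoup would usually be used, and the HTML below is an invented sample:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href targets of every <a> tag seen while parsing."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Made-up page fragment standing in for a scraped response body.
html = '<p><a href="/jobs/1">Data Engineer</a> <a href="/jobs/2">QA</a></p>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/jobs/1', '/jobs/2']
```

The extracted links would then feed the next stage of the pipeline (fetching detail pages, or loading into a data store via the SQL queries the role also mentions).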


Cubera Tech India Pvt Ltd
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)
+10 more

Data Engineer- Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

Design & develop high-performance, scalable solutions that meet the needs of our customers.

Work closely with Product Management, Architects and cross-functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in the Big Data stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.

What are we looking for (Competencies):

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java and Python, including object-oriented design.

Data handling frameworks: Should have a working knowledge of one or more data handling frameworks like Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.

Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP etc.

Data Store: Must have expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, RedShift, etc.

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.

Benefits:

Competitive Salary Packages and benefits.

Collaborative, lively and an upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

Pune
0 - 1 yrs
₹10L - ₹15L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
SQL
+6 more
1. Work closely with senior engineers to design, implement and deploy applications that impact the business with an emphasis on mobile, payments, and product website development
2. Design software and make technology choices across the stack (from data storage to application to front-end)
3. Understand a range of tier-1 systems/services that power our product to make scalable changes to critical path code
4. Own the design and delivery of an integral piece of a tier-1 system or application
5. Work closely with product managers, UX designers, and end users and integrate software components into a fully functional system
6. Work on the management and execution of project plans and delivery commitments
7. Take ownership of product/feature end-to-end for all phases from the development to the production
8. Ensure the developed features are scalable and highly available with no quality concerns
9. Work closely with senior engineers for refining and implementation
10. Manage and execute project plans and delivery commitments
11. Create and execute appropriate quality plans, project plans, test strategies, and processes for development activities in concert with business and project management efforts
Concentric AI

at Concentric AI

7 candid answers
1 product
Gopal Agarwal
Posted by Gopal Agarwal
Pune
2 - 10 yrs
₹2L - ₹50L / yr
Software Testing (QA)
Test Automation (QA)
Python
Jenkins
Automation
+9 more
• 3-10 years of experience in test automation for distributed, scalable software
• Good QA engineering background with proven automation skills
• Able to understand, design and define approach for automation (Backend/UI/service)
• Design and develop automation scripts for QA testing and tools for quality measurements
• Good to have knowledge of Microservices, API, Web services testing
• Strong in Cloud Engineering skillsets (performance, response time, horizontal scale testing)
• Expertise using automation tools/frameworks (Pytest, Jenkins, Robot, etc)
• Expert at one of the scripting languages – Python, shell, etc
• High level system admin skills to configure and manage test environments
• Basics of Kubernetes and databases like Cassandra, Elasticsearch, MongoDB, etc
• Must have worked in an agile environment with CI/CD knowledge
• Having security testing background is a plus
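The backend/API automation this role describes is typically written in the Pytest style mentioned above. The sketch below fakes the API client so the example stays self-contained; a real suite would hit the service over HTTP (e.g. with `requests`), and the endpoint and payload shown here are hypothetical:

```python
def fake_get_user(user_id):
    """Stand-in for an API client call; returns a canned response."""
    return {"status": 200, "body": {"id": user_id, "role": "admin"}}

def test_get_user_returns_ok():
    resp = fake_get_user(42)
    assert resp["status"] == 200

def test_get_user_echoes_id():
    resp = fake_get_user(42)
    assert resp["body"]["id"] == 42

# Pytest would collect the test_* functions automatically; run them
# directly here as a smoke check.
if __name__ == "__main__":
    test_get_user_returns_ok()
    test_get_user_echoes_id()
    print("all checks passed")
```

In CI, the same functions run under `pytest` inside the Jenkins pipeline the listing mentions, with the fake client swapped for the real service endpoint.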
Concentric AI

at Concentric AI

7 candid answers
1 product
Gopal Agarwal
Posted by Gopal Agarwal
Pune
3 - 10 yrs
₹4L - ₹50L / yr
Docker
Kubernetes
DevOps
Python
Jenkins
+9 more
• 3-10 yrs of industry experience
• Energetic self-starter, fast learner, with a desire to work in a startup environment
• Experience working with Public Clouds like AWS
• Operating and Monitoring cloud infrastructure on AWS
• Primary focus on building, implementing and managing operational support
• Design, Develop and Troubleshoot Automation scripts (Configuration/Infrastructure as code or others) for Managing Infrastructure
• Expert at one of the scripting languages – Python, shell, etc
• Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc
• Handling load monitoring, capacity planning, services monitoring
• Proven experience with CI/CD pipelines and handling database upgrade-related issues
• Good Understanding and experience in working with Containerized environments like Kubernetes and Datastores like Cassandra, Elasticsearch, MongoDB, etc
Multinational Company providing energy & Automation digital

Multinational Company providing energy & Automation digital

Agency job
via Jobdost by Sathish Kumar
Hyderabad
4 - 7 yrs
₹14L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Roles and Responsibilities

Big Data Engineer + Spark responsibilities:

● At least 3 to 4 years of relevant experience as a Big Data Engineer

● Minimum 1 year of relevant hands-on experience with the Spark framework

● Minimum 4 years of application development experience using any programming language like Scala/Java/Python

● Hands-on experience with major components of the Hadoop ecosystem like HDFS, MapReduce, Hive or Impala

● Strong programming experience building applications/platforms using Scala/Java/Python

● Experienced in implementing Spark RDD transformations and actions to implement business analysis

● An efficient interpersonal communicator with sound analytical, problem-solving skills and management capabilities

● Strives to keep the slope of the learning curve high and able to quickly adapt to new environments and technologies

● Good knowledge of the agile methodology of software development
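The Spark RDD transformations and actions this listing asks for follow a lazy-transformation / eager-action pattern. Since a Spark installation is not assumed here, the sketch below mimics that chain with stdlib builtins and notes the PySpark equivalent in a comment:

```python
from functools import reduce

# Stdlib-only sketch of the RDD pattern: chained lazy transformations
# (filter/map produce iterators, nothing is computed yet) followed by an
# action (reduce) that forces evaluation and materialises a result.
data = range(1, 11)                          # stand-in for an RDD of numbers

evens = filter(lambda x: x % 2 == 0, data)   # transformation (lazy)
squared = map(lambda x: x * x, evens)        # transformation (lazy)
total = reduce(lambda a, b: a + b, squared)  # action: triggers the pipeline

# In PySpark the same chain would read:
#   sc.parallelize(range(1, 11)).filter(...).map(...).reduce(...)
print(total)  # 220
```

The distinction matters in interviews and in practice: transformations only build a lineage graph, and no work happens until an action such as `reduce`, `collect` or `count` is invoked.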
Multinational Company providing energy & Automation digital

Multinational Company providing energy & Automation digital

Agency job
via Jobdost by Sathish Kumar
Hyderabad
7 - 12 yrs
₹12L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Skills

● Proficient experience of a minimum of 7 years with Hadoop

● Hands-on experience of a minimum of 2 years with AWS - EMR/S3 and other AWS services and dashboards

● Good experience of a minimum of 2 years with the Spark framework

● Good understanding of the Hadoop ecosystem, including Hive, MR, Spark and Zeppelin

● Responsible for troubleshooting and recommendations for Spark and MR jobs; should be able to use existing logs to debug issues

● Responsible for implementation and ongoing administration of Hadoop infrastructure, including monitoring, tuning and troubleshooting

● Triage production issues when they occur, together with other operational teams

● Hands-on experience troubleshooting incidents, formulating theories, testing hypotheses and narrowing down possibilities to find the root cause