
50+ Big Data Jobs in India

Apply to 50+ Big Data jobs on CutShort.io. Find your next job, effortlessly. Browse Big Data jobs and apply today!

Softway

Posted by Neethi Gnanakan
Remote only
2 - 4 yrs
₹15L - ₹15L / yr
Big Data
Kubernetes
Amazon Redshift
Docker
Amazon Aurora
+2 more

Who is Softway?


Softway is a business-to-employee solutions company. We work with businesses to influence everyday behaviors, mindsets, and attitudes and help drive valuable change from within: faster innovation and more inclusion to lead cultural and digital transformations, and human-focused technology to propel businesses. Our solutions include culture & inclusion services and products, technology experiences, and communications. Ultimately, we believe love is a business strategy.


Who are we looking for?

Softway is on the lookout for an AWS Cloud Engineer who is passionate about building products that our customers love. You will join a dynamic and fast-paced environment and work with cross-functional teams to design, build, and roll out products that deliver the company’s vision and strategy.

What you should be (more important than what you should know):

You think in big-picture terms and apply that thinking to whole systems rather than a single isolated use case.

You have a passion for imparting best practices to other developers and the organization as a whole.

You can work alongside developers and clients to architect solutions and/or troubleshoot an existing solution.

You should be able to share your learnings with your team and contribute to a training system that empowers other members of the organization.

You would rather invest the time to automate a problem than do the same work again.

You have a passion for learning while making an impact.

A sense of humor and the ability to banter are a MUST.


What you should know:

Production-level experience with the AWS cloud platform and its core services.

Good understanding of the network layer (TCP/UDP, ports, firewalls, NAT, subnets, VPC, VPN).

Hands-on experience providing solutions for Redshift, Aurora, and other databases.

Experience in release management using CodePipeline and Bitbucket.

Big Data solutions on AWS using Firehose, Kinesis, Redshift, etc. (see the sketch after this list).

Serverless microservice solutions using Lambda, AppSync, ECS, etc.

Experience working with microservices-based architectures.

Experience with and understanding of hosting and maintaining applications in Docker containers.

Hands-on experience hosting, maintaining, and monitoring highly available, fault-tolerant, disaster-recovery-ready applications (load balancing, autoscaling, and various deployment strategies).

Strong experience working on Linux-based infrastructure.

Incident management, root cause analysis, and excellent troubleshooting skills.

Infrastructure automation using AWS CloudFormation, Terraform, and other tools in the HashiCorp suite.

Exposure to different phases of the software development life cycle.

Experience working with service-oriented architecture.

Production-level experience configuring and maintaining databases (MongoDB, Postgres, MySQL, etc.).
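
For illustration, a minimal sketch of the Big Data plumbing referenced above, assuming the boto3 package and an already-provisioned Kinesis Data Firehose delivery stream (the stream name "events-to-redshift" is hypothetical) that delivers to Redshift or S3:

```python
# Minimal sketch: push JSON events to a Kinesis Data Firehose delivery stream.
# Credentials are assumed to come from the environment or instance profile.
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

def send_event(event: dict) -> None:
    # Firehose buffers records and delivers them to the configured
    # destination (e.g., S3, or Redshift via an automatic COPY).
    firehose.put_record(
        DeliveryStreamName="events-to-redshift",  # hypothetical stream name
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )

send_event({"user_id": 42, "action": "page_view"})
```

For higher throughput, put_record_batch can send up to 500 records per call.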

Good to have:

AWS certifications such as AWS Certified Solutions Architect, AWS Certified Developer, or AWS Certified SysOps Administrator are highly desirable.

Why Softway?

Softway’s cross-functional team structure, varied domains and flat hierarchy offer much exposure and learning from a bird's-eye view. You’d love the interaction with our talented folks. Softway gives you ample learning opportunities, pushing you while having your back. We work hard and party harder! We encourage you to voice your opinions and yes, we actually listen. We take pride in having an ego-less workforce; it allows us to focus on what we’re good at: bringing solutions to life. In addition to a competitive salary and great work culture, we offer GREAT FLEXIBILITY, which includes having 2 Fridays off in a month as part of moving to an action-packed, efficient 4-day week.

Read more
NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
2 - 8 yrs
Best in industry
Python
Django
Go Programming (Golang)
NodeJS (Node.js)
Microservices
+11 more

We're seeking an experienced Backend Software Engineer to join our team.

As a backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop.

This includes APIs, databases, and server-side logic.

Responsibilities

  • Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
  • Write clean, efficient, and well-documented code that adheres to industry standards and best practices
  • Participate in code reviews and contribute to the improvement of the codebase
  • Debug and resolve issues in the existing codebase
  • Develop and execute unit tests to ensure high code quality
  • Work with DevOps engineers to ensure seamless deployment of software changes
  • Monitor application performance, identify bottlenecks, and optimize systems for better scalability and efficiency
  • Stay up-to-date with industry trends and emerging technologies; advocate for best practices and new ideas within the team
  • Collaborate with cross-functional teams to identify and prioritize project requirements

Requirements

  • At least 2+ years of experience building scalable and reliable backend systems
  • Strong proficiency in at least one programming language such as Python, Node.js, Golang, or RoR
  • Experience with at least one framework such as Django, Express, or gRPC
  • Knowledge of database systems such as MySQL, PostgreSQL, MongoDB, Cassandra, or Redis
  • Familiarity with containerization technologies such as Docker and Kubernetes
  • Understanding of software development methodologies such as Agile and Scrum
  • Flexibility to pick up a new technology stack and ramp up on it fairly quickly
  • Bachelor's/Master's degree in Computer Science or related field
  • Strong problem-solving skills and ability to collaborate effectively with cross-functional teams
  • Good written and verbal communication skills in English


Read more
DataGrokr

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
5yrs+
Up to ₹30L / yr (varies)
Data engineering
Python
SQL
ETL
Data Warehouse (DWH)
+12 more

Lightning Job By Cutshort ⚡

As part of this feature, you can expect status updates about your application and replies within 72 hours (once the screening questions are answered).


About DataGrokr

DataGrokr (https://www.datagrokr.com) is a cloud-native technology consulting organization providing the next generation of big data analytics, cloud, and enterprise solutions. We solve complex technology problems for our global clients, who rely on us for our deep technical knowledge and delivery excellence.

If you are unafraid of technology, believe in your learning ability and are looking to work amongst smart, driven colleagues whom you can look up to and learn from, you might want to check us out. 


About the role

We are looking for a Senior Data Engineer to join our growing engineering team. As a member of the team,

• You will work on enterprise data platforms, architecting and implementing data lakes both on-prem and in the cloud.

• You will be responsible for evolving technical architecture, design and implementation of data solutions using a variety of big data technologies. You will work extensively on all major public cloud platforms - AWS, Azure and GCP.

• You will work with senior technical architects on our client side to evolve an effective technology architecture and development strategy to implement the solution.

• You will work with extremely talented peers and follow modern engineering practices using agile methodologies.

• You will coach, mentor and lead other engineers and provide guidance to ensure the quality and consistency of the solution.


Must-have skills and attitudes:

• Passion for data engineering and in-depth knowledge of some of the following technologies: SQL (expert level), Python (expert level), Spark (intermediate level), and the Big Data stack of AWS or GCP.

• Hands-on experience in data wrangling, data munging and ETL. Should be able to source data from anywhere and transform data to any shape using SQL, Python or Spark (see the sketch after this list).

• Hands-on experience working with variable data structures like XML/JSON/Avro, etc.

• Ability to create data models and architect data warehouse components.

• Experience with version control (Git/Bitbucket, etc.).

• Strong understanding of Agile; experience with CI/CD pipelines and processes.

• Ability to communicate with technical as well as non-technical audiences.

• Collaborating with various stakeholders.

• Experience leading scrum teams and participating in sprint grooming and planning sessions, and work/effort sizing and estimation.
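
As a small illustration of the SQL/Python/Spark wrangling mentioned above, a minimal PySpark sketch (paths and column names are hypothetical) that reshapes nested JSON into a flat, queryable table:

```python
# Minimal sketch: read nested JSON, flatten it, and write partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wrangling-sketch").getOrCreate()

raw = spark.read.json("s3://bucket/raw/orders/")  # hypothetical input path

flat = (
    raw
    # One row per order line item instead of a nested array.
    .withColumn("item", F.explode("line_items"))
    .select(
        "order_id",
        F.col("customer.id").alias("customer_id"),
        F.col("item.sku").alias("sku"),
        F.col("item.qty").cast("int").alias("qty"),
        F.to_date("created_at").alias("order_date"),
    )
)

flat.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://bucket/curated/orders/"  # hypothetical output path
)
```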


Desired Skills & Experience:

• At least 5 years of industry experience

• Working knowledge of any of the following: the AWS Big Data stack (S3, Redshift, Athena, Glue, etc.) or the GCP Big Data stack (Cloud Storage, Workflows, Dataflow, Cloud Functions, BigQuery, Pub/Sub, etc.).

• Working knowledge of traditional enterprise data warehouse architectures and migrating them to the Cloud.

• Experience with a data visualization tool (Tableau, Power BI, etc.)

• Experience with Jira, Azure DevOps, etc.


How will DataGrokr support you in your growth:

• You will be groomed and mentored by senior leaders to take on leadership positions in the company

• You will be actively encouraged to attain certifications, lead technical workshops and conduct meetups to grow your own technology acumen and personal brand

• You will work in an open culture that promotes commitment over compliance, individual responsibility over rules and bringing out the best in everyone.

Read more
DeepIntent

Posted by Indrajeet Deshmukh
Pune
3 - 5 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more

About DeepIntent:

DeepIntent is a marketing technology company that helps healthcare brands strengthen communication with patients and healthcare professionals by enabling highly effective and performant digital advertising campaigns. Our healthcare technology platform, MarketMatch™, connects advertisers, data providers, and publishers to operate the first unified, programmatic marketplace for healthcare marketers. The platform’s built-in identity solution matches digital IDs with clinical, behavioural, and contextual data in real-time so marketers can qualify 1.6M+ verified HCPs and 225M+ patients to find their most clinically-relevant audiences and message them on a one-to-one basis in a privacy-compliant way. Healthcare marketers use MarketMatch to plan, activate, and measure digital campaigns in ways that best suit their business, from managed service engagements to technical integration or self-service solutions. DeepIntent was founded by Memorial Sloan Kettering alumni in 2016 and acquired by Propel Media, Inc. in 2017. We proudly serve major pharmaceutical and Fortune 500 companies out of our offices in New York, Bosnia and India.


What You’ll Do:

  • Establish a formal data practice for the organisation.
  • Build & operate scalable and robust data architectures.
  • Create pipelines for the self-service introduction and usage of new data.
  • Implement DataOps practices.
  • Design, develop, and operate data pipelines that support data scientists and machine learning engineers.
  • Build simple, highly reliable data storage, ingestion, and transformation solutions that are easy to deploy and manage.
  • Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.

Who You Are:

  • Experience in designing, developing and operating configurable data pipelines serving high-volume, high-velocity data.
  • Experience working with public clouds like GCP/AWS.
  • Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies.
  • Experience building data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
  • Proficient with SQL, Java, Spring Boot, Python or a JVM-based language, and Bash.
  • Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
  • Good communication skills with the ability to collaborate with both technical and non-technical people.
  • Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious.

 

Read more
PloPdo
Posted by Chandan Nadkarni
Hyderabad
3 - 12 yrs
₹22L - ₹25L / yr
Cassandra
Data modeling

Responsibilities -

  • Collaborate with the development team to understand data requirements and identify potential scalability issues.
  • Design, develop, and implement scalable data pipelines and ETL processes to ingest, process, and analyse large volumes of data from various sources.
  • Optimize data models and database schemas to improve query performance and reduce latency (see the sketch after this list).
  • Monitor and troubleshoot the performance of our Cassandra database on Azure Cosmos DB, identifying bottlenecks and implementing optimizations as needed.
  • Work with cross-functional teams to ensure data quality, integrity, and security.
  • Stay up to date with emerging technologies and best practices in data engineering and distributed systems.
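
By way of illustration, a minimal sketch of the query-first data modeling this role calls for, assuming the Python cassandra-driver package (keyspace, table, and column names are hypothetical; a Cosmos DB Cassandra API endpoint would additionally need its contact point and SSL credentials):

```python
# Minimal sketch: model a Cassandra table around its read path so queries
# hit a single partition, the main lever for latency and scalability.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # local node for illustration
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS metrics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# Partition by (device_id, day) to bound partition size; cluster by ts DESC
# so "latest readings for a device on a day" is one in-order partition read.
session.execute("""
    CREATE TABLE IF NOT EXISTS metrics.readings (
        device_id text,
        day       date,
        ts        timestamp,
        value     double,
        PRIMARY KEY ((device_id, day), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")
```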


Qualifications & Requirements -

  • Proven experience as a Data Engineer or similar role, with a focus on designing and optimizing large-scale data systems.
  • Strong proficiency in working with NoSQL databases, particularly Cassandra.
  • Experience with cloud-based data platforms, preferably Azure Cosmos DB.
  • Solid understanding of distributed systems, data modelling, data warehouse design, and ETL processes.
  • Detailed understanding of the Software Development Life Cycle (SDLC) is required.
  • Good to have: knowledge of a visualization tool like Power BI or Tableau.
  • Good to have: knowledge of the SAP landscape (SAP ECC, SLT, BW, HANA, etc.).
  • Good to have: experience on a data migration project.
  • Knowledge of the supply chain domain would be a plus.
  • Familiarity with software architecture (data structures, data schemas, etc.)
  • Familiarity with the Python programming language is a plus.
  • The ability to work in a dynamic, fast-paced work environment.
  • A passion for data and information with strong analytical, problem solving, and organizational skills.
  • Self-motivated with the ability to work under minimal direction.
  • Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.


Read more
TensorGo Software Private Limited
Posted by Deepika Agarwal
Hyderabad
7 - 12 yrs
₹15L - ₹15L / yr
Engineering Management
Java
NodeJS (Node.js)
Microservices
Big Data
+4 more

Role & Responsibilities

  1. Create innovative architectures based on business requirements.
  2. Design and develop cloud-based solutions for global enterprises.
  3. Coach and nurture engineering teams through feedback, design reviews, and best practice input.
  4. Lead cross-team projects, ensuring resolution of technical blockers.
  5. Collaborate with internal engineering teams, global technology firms, and the open-source community.
  6. Lead initiatives to learn and apply modern and advanced technologies.
  7. Oversee the launch of innovative products in high-volume production environments.
  8. Develop and maintain high-quality software applications using JS frameworks (React, NPM, Node.js, etc.).
  9. Utilize design patterns for backend technologies and ensure strong coding skills.
  10. Deploy and manage applications on AWS cloud services, including ECS (Fargate), Lambda, and load balancers. Work with Docker to containerize services.
  11. Implement and follow CI/CD practices using GitLab for automated build, test, and deployment processes.
  12. Collaborate with cross-functional teams to design technical solutions, ensuring adherence to Microservice Design patterns and Architecture.
  13. Apply expertise in Authentication & Authorization protocols (e.g., JWT, OAuth), including certificate handling, to ensure robust application security.
  14. Utilize databases such as Postgres, MySQL, Mongo and DynamoDB for efficient data storage and retrieval.
  15. Demonstrate familiarity with Big Data technologies, including but not limited to:

- Apache Kafka for distributed event streaming (see the sketch below).

- Apache Spark for large-scale data processing.

- Containers for scalable and portable deployments.
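
As a brief illustration of the event-streaming item above, a minimal sketch assuming the kafka-python package and a local broker (the topic name is hypothetical):

```python
# Minimal sketch: produce a JSON event to Kafka and read it back.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("events", {"user_id": 42, "action": "click"})  # hypothetical topic
producer.flush()

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # {'user_id': 42, 'action': 'click'}
    break
```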


Technical Skills:

  1. 7+ years of hands-on development experience with JS frameworks, specifically MERN.
  2. Strong coding skills in backend technologies using various design patterns.
  3. Strong UI development skills using React.
  4. Expert in containerization using Docker.
  5. Knowledge of cloud platforms, specifically OCI, and familiarity with serverless technology, services like ECS, Lambda, and load balancers.
  6. Proficiency in CI/CD practices using GitLab or Bamboo.
  7. Strong knowledge of Microservice Design patterns and Architecture.
  8. Expertise in authentication and authorization protocols like JWT and OAuth, including certificate handling.
  9. Experience working with high-volume streaming media data.
  10. Experience working with databases such as Postgres, MySQL, and DynamoDB.
  11. Familiarity with Big Data technologies related to Kafka, PySpark, Apache Spark, Containers, etc.
  12. Experience with container Orchestration tools like Kubernetes.


Read more
DeepIntent

Posted by Indrajeet Deshmukh
Pune
1 - 3 yrs
Best in industry
MongoDB
Big Data
Apache Kafka
Spring MVC
Spark
+1 more

With a core belief that advertising technology can measurably improve the lives of patients, DeepIntent is leading the healthcare advertising industry into the future. Built purposefully for the healthcare industry, the DeepIntent Healthcare Advertising Platform is proven to drive higher audience quality and script performance with patented technology and the industry’s most comprehensive health data. DeepIntent is trusted by 600+ pharmaceutical brands and all the leading healthcare agencies to reach the most relevant healthcare provider and patient audiences across all channels and devices. For more information, visit DeepIntent.com or find us on LinkedIn.


What You’ll Do:

  • Ensure timely and top-quality product delivery
  • Ensure that the end product is fully and correctly defined and documented
  • Ensure implementation/continuous improvement of formal processes to support product development activities
  • Drive the architecture/design decisions needed to achieve cost-effective and high-performance results
  • Conduct feasibility analysis, produce functional and design specifications of proposed new features.
  • Provide helpful and productive code reviews for peers and junior members of the team.
  • Troubleshoot complex issues discovered in-house as well as in customer environments.


Who You Are:

  • Strong computer science fundamentals in algorithms, data structures, databases, operating systems, etc.
  • Expertise in Java, Object Oriented Programming, Design Patterns
  • Experience in coding and implementing scalable solutions in a large-scale distributed environment
  • Working experience in a Linux/UNIX environment is good to have
  • Experience with relational databases and database concepts, preferably MySQL
  • Experience with SQL and Java optimization for real-time systems
  • Familiarity with version control systems like Git and build tools like Maven
  • Excellent interpersonal, written, and verbal communication skills
  • BE/B.Tech./M.Sc./MCS/MCA in Computers or equivalent


The set of skills we are looking for:

  • MongoDB
  • Big Data
  • Apache Kafka 
  • Spring MVC 
  • Spark 
  • Java 


DeepIntent is committed to bringing together individuals from different backgrounds and perspectives. We strive to create an inclusive environment where everyone can thrive, feel a sense of belonging, and do great work together.

DeepIntent is an Equal Opportunity Employer, providing equal employment and advancement opportunities to all individuals. We recruit, hire and promote into all job levels the most qualified applicants without regard to race, color, creed, national origin, religion, sex (including pregnancy, childbirth and related medical conditions), parental status, age, disability, genetic information, citizenship status, veteran status, gender identity or expression, transgender status, sexual orientation, marital, family or partnership status, political affiliation or activities, military service, immigration status, or any other status protected under applicable federal, state and local laws. If you have a disability or special need that requires accommodation, please let us know in advance.

DeepIntent’s commitment to providing equal employment opportunities extends to all aspects of employment, including job assignment, compensation, discipline and access to benefits and training.

Read more
DeepIntent

Posted by Indrajeet Deshmukh
Pune
3 - 8 yrs
Best in industry
MongoDB
Big Data
Apache Kafka
Spring MVC
Spark
+1 more

With a core belief that advertising technology can measurably improve the lives of patients, DeepIntent is leading the healthcare advertising industry into the future. Built purposefully for the healthcare industry, the DeepIntent Healthcare Advertising Platform is proven to drive higher audience quality and script performance with patented technology and the industry’s most comprehensive health data. DeepIntent is trusted by 600+ pharmaceutical brands and all the leading healthcare agencies to reach the most relevant healthcare provider and patient audiences across all channels and devices. For more information, visit DeepIntent.com or find us on LinkedIn.


What You’ll Do:

  • Ensure timely and top-quality product delivery
  • Ensure that the end product is fully and correctly defined and documented
  • Ensure implementation/continuous improvement of formal processes to support product development activities
  • Drive the architecture/design decisions needed to achieve cost-effective and high-performance results
  • Conduct feasibility analysis, produce functional and design specifications of proposed new features.
  • Provide helpful and productive code reviews for peers and junior members of the team.
  • Troubleshoot complex issues discovered in-house as well as in customer environments.


Who You Are:

  • Strong computer science fundamentals in algorithms, data structures, databases, operating systems, etc.
  • Expertise in Java, Object Oriented Programming, Design Patterns
  • Experience in coding and implementing scalable solutions in a large-scale distributed environment
  • Working experience in a Linux/UNIX environment is good to have
  • Experience with relational databases and database concepts, preferably MySQL
  • Experience with SQL and Java optimization for real-time systems
  • Familiarity with version control systems like Git and build tools like Maven
  • Excellent interpersonal, written, and verbal communication skills
  • BE/B.Tech./M.Sc./MCS/MCA in Computers or equivalent


The set of skills we are looking for:

  • MongoDB
  • Big Data
  • Apache Kafka 
  • Spring MVC 
  • Spark 
  • Java 


DeepIntent is committed to bringing together individuals from different backgrounds and perspectives. We strive to create an inclusive environment where everyone can thrive, feel a sense of belonging, and do great work together.

DeepIntent is an Equal Opportunity Employer, providing equal employment and advancement opportunities to all individuals. We recruit, hire and promote into all job levels the most qualified applicants without regard to race, color, creed, national origin, religion, sex (including pregnancy, childbirth and related medical conditions), parental status, age, disability, genetic information, citizenship status, veteran status, gender identity or expression, transgender status, sexual orientation, marital, family or partnership status, political affiliation or activities, military service, immigration status, or any other status protected under applicable federal, state and local laws. If you have a disability or special need that requires accommodation, please let us know in advance.

DeepIntent’s commitment to providing equal employment opportunities extends to all aspects of employment, including job assignment, compensation, discipline and access to benefits and training.


Read more
Acelucid Technologies Pvt Ltd
Posted by Shivani Tyagi
Tumkur, Dehradun
5 - 12 yrs
₹15L - ₹30L / yr
SQL server
SQL Query Analyzer
SQL Azure
Big Data
Database migration

Bachelor’s degree in Information Technology or a related field desirable.

• 5 years of database administrator experience in Microsoft technologies.

• Experience with Azure SQL in a multi-region configuration.

• Azure certifications (good to have).

• 2+ years’ experience performing data migration upgrades/modernizations and performance tuning on IaaS and PaaS (Managed Instance and SQL Azure).

• Experience with routine maintenance, recovery, and handling failover of databases.

• Knowledge of RDBMS (e.g., Microsoft SQL Server) and the Azure cloud platform.

• Expertise in Microsoft SQL Server on VMs, Azure SQL Managed Instance, and Azure SQL.

• Experience in setting up and working with Azure data warehouses.



Read more
xyz

Agency job
via HR BIZ HUB by Pooja shankla
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹15L / yr
Java
Big Data
Apache Hive
Hadoop
Spark

Job Title: Big Data Developer

Job Description

Bachelor's degree in Engineering or Computer Science or equivalent OR Master's in Computer Applications or equivalent.

Solid software development experience, including leading teams of engineers and scrum teams.

4+ years of hands-on experience working with MapReduce, Hive, and Spark (core, SQL, and PySpark).

Solid grasp of data warehousing concepts.

Knowledge of the financial reporting ecosystem will be a plus.

4+ years of experience within data engineering / data warehousing using Big Data technologies will be an added advantage.

Expertise in the distributed ecosystem.

Hands-on experience with programming using Core Java or Python/Scala.

Expertise in Hadoop and Spark architecture and their working principles.

Hands-on experience writing and understanding complex SQL (Hive/PySpark DataFrames) and optimizing joins while processing huge amounts of data (see the sketch after this list).

Experience in UNIX shell scripting.
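
To illustrate the join-optimization point above, a minimal PySpark sketch that broadcasts a small dimension table so the large fact table is not shuffled (paths and column names are hypothetical):

```python
# Minimal sketch: broadcast join to avoid shuffling the large side.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("join-optimization").getOrCreate()

fact = spark.read.parquet("s3://bucket/transactions/")  # large fact table
dim = spark.read.parquet("s3://bucket/merchants/")      # small lookup table

# F.broadcast ships `dim` to every executor; only `fact` stays partitioned.
enriched = fact.join(F.broadcast(dim), on="merchant_id", how="left")

(enriched
    .groupBy("merchant_category")
    .agg(F.sum("amount").alias("total_amount"))
    .show())
```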

Roles & Responsibilities

Ability to design and develop optimized data pipelines for batch and real-time data processing

Should have experience in analysis, design, development, testing, and implementation of system applications

Demonstrated ability to develop and document technical and functional specifications and analyze software and system processing flows.

Excellent technical and analytical aptitude

Good communication skills.

Excellent Project management skills.

Results-driven approach.

Mandatory Skills: Big Data, PySpark, Hive

Read more
This opening is with an MNC
Mumbai, Malad, Andheri
8 - 13 yrs
₹13L - ₹22L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+8 more

Minimum of 8 years of experience, of which 4 years should be applied data mining experience in disciplines such as call centre metrics.

• Strong experience in advanced statistics and analytics, including segmentation, modelling, regression, forecasting, etc.

• Experience with leading and managing large teams.

• Demonstrated pattern of success in using advanced quantitative analytic methods to solve business problems.

• Demonstrated experience with Business Intelligence / data mining tools to work with data, investigate anomalies, construct data sets, and build models.

• Candidates should share details of projects undertaken (preferably in the telecom industry), specifically analysis from CRM data.

Read more
Hyderabad
8 - 10 yrs
₹13L - ₹15L / yr
SQL server
Oracle
Cassandra
Terraform
Shell Scripting
+3 more

Role: Oracle DBA Developer


Location: Hyderabad


Required Experience: 8 + Years


Skills : DBA, Terraform, Ansible, Python, Shell Script, DevOps activities, Oracle DBA, SQL server, Cassandra, Oracle sql/plsql, MySQL/Oracle/MSSql/Mongo/Cassandra, Security measure configuration




Roles and Responsibilities:


 


1. 8+ years of hands-on DBA experience in one or many of the following: SQL Server, Oracle, Cassandra


2. DBA experience in a SRE environment will be an advantage.


3. Experience in automation/building databases by providing self-service tools; analyze and implement solutions for database administration (e.g., backups, performance tuning, troubleshooting, capacity planning).


4. Analyze solutions and implement best practices for cloud database and their components.


5. Build and enhance tooling, automation, and CI/CD workflows (Jenkins, etc.) that provide safe self-service capabilities to the teams.


6. Implement proactive monitoring and alerting to detect issues before they impact users. Use a metrics-driven approach to identify and root-cause performance and scalability bottlenecks in the system.


7. Work on automation of database infrastructure and help engineering succeed by providing self-service tools.


8. Write database documentation, including data standards, procedures, and definitions for the data dictionary (metadata)


9. Monitor database performance, control access permissions and privileges, capacity planning, implement changes and apply new patches and versions when required.


10. Recommend query and schema changes to optimize the performance of database queries.


11. Have experience with cloud-based environments (OCI, AWS, Azure) as well as On-Premises.


12. Have experience with cloud database such as SQL server, Oracle, Cassandra


13. Have experience with infrastructure automation and configuration management (Jira, Confluence, Ansible, Gitlab, Terraform)


14. Have excellent written and verbal English communication skills.


15. Planning, managing, and scaling of data stores to ensure a business’ complex data requirements are met and it can easily access its data in a fast, reliable, and safe manner.


16. Ensure the quality of orchestration and integration of tools needed to support daily operations by patching together existing infrastructure with cloud solutions and additional data infrastructure.


17. Ensure data security by rigorously testing backup and recovery processes and frequently auditing well-regulated security procedures.


18. Use software and tooling to automate manual tasks and enable engineers to move fast without the concern of losing data during their experiments.


19. Define service level objectives (SLOs) and perform risk analysis to determine which problems to address and which to automate.


20. Bachelor's Degree in a technical discipline required.


21. DBA Certifications required: Oracle, SQLServer, Cassandra (2 or more)


22. Cloud and DevOps certifications will be an advantage.


 


Must have Skills:


 


• Oracle DBA with development


• SQL


• DevOps tools


• Cassandra






Read more
Klevu
Posted by Vandana Purohit
Ahmedabad
4 - 12 yrs
₹10L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Experience: 7+ years


Location: Ahmedabad


About Klevu

Klevu is a discovery experience suite that revolutionises online shopping. It provides a range of SaaS-based solutions that are deep-rooted in AI-ML technology and delivered via the cloud for online retailers. Klevu is a global Finnish company that prides itself on diversity, a great team, an excellent partner network and a global customer footprint. Learn more at www.klevu.com


Responsibilities:

  • Build, scale and maintain the Klevu data platform
  • Design and develop solutions for business needs on the Klevu data platform


Must have skills:

  • 5+ years of experience working with big data systems
  • Well versed in Java, Spring, stream processing and batch processing.
  • Experience working in large data pipelines and data lakes / warehouses
  • Strong experience in one of the data processing technologies like Spark, Storm, Flink, etc.
  • Experience working with NoSQL and data warehouse technologies like TimescaleDB, ClickHouse, Redshift, BigQuery, etc.
  • Product minded engineer with strong problem solving and analytical skills
  • Ability to communicate effectively within and outside the engineering team
  • Familiar with agile software development process


Nice to have:

  • Experience in AWS cloud
  • Familiar with R or Python
  • Experience working with microservices architecture
  • Experience working in eCommerce and/or SAAS startups


What we offer:

  • Being part of one of the most exciting search technology companies from the Nordics, with a global footprint
  • Work with a team in 3 continents (North America, Europe, Asia)
  • Excellent work culture, hands-on, result oriented
  • Good salary and flexible hours


Read more
6sense

Posted by Romesh Rawat
Remote only
9 - 15 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

About Us:

6sense is a Predictive Intelligence Engine that is reimagining how B2B companies do sales and marketing. It works with big data at scale, advanced machine learning and predictive modelling to find buyers and predict what they will purchase, when and how much.

6sense helps B2B marketing and sales organizations fully understand the complex ABM buyer journey. By combining intent signals from every channel with the industry’s most advanced AI predictive capabilities, it is finally possible to predict account demand and optimize demand generation in an ABM world. Equipped with the power of AI and the 6sense Demand Platform™, marketing and sales professionals can uncover, prioritize, and engage buyers to drive more revenue.

6sense is seeking a Staff Software Engineer, Data to become part of a team designing, developing, and deploying its customer-centric applications.

We’ve more than doubled our revenue in the past five years and completed our Series E funding of $200M last year, giving us a stable foundation for growth.


Responsibilities:

1. Own critical datasets and data pipelines for product & business, and work towards direct business goals of increased data coverage, data match rates, data quality, and data freshness

2. Create more value from various datasets with creative solutions, unlocking more value from existing data, and help build a data moat for the company

3. Design, develop, test, deploy and maintain optimal data pipelines, and assemble large, complex data sets that meet functional and non-functional business requirements

4. Improve our current data pipelines, i.e. improve their performance and SLAs, remove redundancies, and figure out a way to test before vs. after roll-out

5. Identify, design, and implement process improvements in data flow across multiple stages and via collaboration with multiple cross-functional teams, e.g. automating manual processes, optimising data delivery, hand-off processes, etc.

6. Work with cross-functional stakeholders including the Product, Data Analytics and Customer Support teams to enable data access and related goals

7. Build for security, privacy, scalability, reliability and compliance

8. Mentor and coach other team members on scalable and extensible solutions design, and best coding standards

9. Help build a team and cultivate innovation by driving cross-collaboration and execution of projects across multiple teams

Requirements:

• 8-10+ years of overall work experience as a Data Engineer

• Excellent analytical and problem-solving skills

• Strong experience with Big Data technologies like Apache Spark; experience with Hadoop, Hive, Presto would be a plus

• Strong experience in writing complex, optimized SQL queries across large data sets; experience with optimizing queries and underlying storage

• Experience with Python/Scala

• Experience with Apache Airflow or other orchestration tools

• Experience with writing Hive / Presto UDFs in Java

• Experience working on the AWS cloud platform and services

• Experience with key-value stores or NoSQL databases would be a plus

• Comfortable with the Unix / Linux command line

Interpersonal Skills:

• You can work independently as well as part of a team.

• You take ownership of projects and drive them to conclusion.

• You’re a good communicator and are capable of not just doing the work, but also teaching others and explaining the “why” behind complicated technical decisions.

• You aren’t afraid to roll up your sleeves: this role will evolve over time, and we’ll want you to evolve with it.

Read more
Thoughtworks

Posted by Sunidhi Thakur
Bengaluru (Bangalore)
10 - 13 yrs
Best in industry
Data modeling
PySpark
Data engineering
Big Data
Hadoop
+10 more

Lead Data Engineer

 

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

 

Job responsibilities

 

·      You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems

·      You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges

·      You will collaborate with Data Scientists in order to design scalable implementations of their models

·      You will pair to write clean and iterative code based on TDD

·      Leverage various continuous delivery practices to deploy, support and operate data pipelines

·      Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

·      Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

·      Create data models and speak to the tradeoffs of different modeling approaches

·      On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product

·      Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

·      Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes

 

Job qualifications

Technical skills

·      You are equally happy coding and leading a team to implement a solution

·      You have a track record of innovation and expertise in Data Engineering

·      You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations

·      You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop

·      You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

·      Hands-on experience with MapR, Cloudera, Hortonworks and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)

·      You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

·      You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

·      Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems

 

Professional skills


·      Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers

·      You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

·      An interest in coaching others, sharing your experience and knowledge with teammates

·      You enjoy influencing others and always advocate for technical excellence while being open to change when needed

Read more
Cambodia
2 - 5 yrs
₹3L - ₹6L / yr
Search Engine Optimization (SEO)
MS-Outlook
Copy Writing
Content Writing
Artificial Intelligence (AI)
+6 more

● Work in a Business Process Outsourcing (BPO) Module providing marketing solutions to different local and international clients and Business Development Units.

● Content Production: Creating content to support demand generation initiatives, and grow brand awareness in a competitive category of an online casino company.

● Writing content for different marketing channels (such as website, blogs, thought leadership pieces, social media, podcasts, webinar, etc.) as assigned to effectively reach the desired target players and marketing goals.

● Data Analysis: Analyze player data to identify trends, improve the player experience, and make data-driven decisions for the casino's operations.

● Research and use AI-based tools to improve and speed up content creation processes.

● Researching content and consumer trends to ensure that content is relevant and appealing.

● Help develop and participate in market research for the purposes of thought leadership content production and opportunities, and competitive intelligence for content marketing.

● Security: Maintain a secure online environment, including protecting player data and preventing cyberattacks.

● Managing content calendars (and supporting calendar management) and ensuring the content you write is consistent with brand standards and meets the brief as-assigned.

● Coordinating with project manager / content manager to ensure the timely delivery of assignments.

● Keeping up to date with content trends, consumer preferences, and advancements in technology.

● Reporting: Generate regular reports on key performance indicators, financial metrics, and operational data to assess the casino's performance.


The specific responsibilities and requirements for a Marketing Content Supervisor/Manager in an online casino may vary depending on the size and nature of the casino, as well as local regulations and industry standards. 


Salary

Php 80,000 - Php 100,000 (approximately INR 117,587 - 146,960)


Work Experience Requirements


Essential Qualifications


● Excellent research, writing, editing, proofreading, content creation and communication skills.

● Proficiency/experience in formulating corporate/brand/product messaging.

● Strong understanding of SEO and content practices.

● Proficiency in MS Office, Zoom, Slack, marketing platforms related to creative content creation/ project management/ workflow.

● Content writing / copywriting portfolio demonstrating scope of content/copy writing capabilities and application of writing and SEO best practices.

● Highly motivated, self-starter, able to prioritize projects, accept responsibility and follow through without close supervision on every step.

● Demonstrated strong analytical skills with an action-oriented mindset focused on data-driven results.

● Experience in AI-based content creation tools is a plus. Openness to research and use AI tools required.

● Passion for learning and self-improvement.

● Detail-oriented team player with a positive attitude.

● Ability to embrace change and love working in dynamic, growing environments.

● Experience with research, content production, writing on-brand and turning thought pieces into multiple content assets by simplifying complex concepts preferred.

● Ability to keep abreast of content trends and advancements in content strategies and technologies.

● On-camera or on-mic experience or desire to speak and present preferred.

● Must be willing to report onsite in Cambodia

Read more
Staffbee Solutions INC
Remote only
6 - 10 yrs
₹1L - ₹1.5L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+11 more

Looking for freelance work?

We are seeking a freelance Data Engineer with 7+ years of experience

 

Skills Required: deep knowledge of any cloud (AWS, Azure, Google Cloud), Databricks, data lakes, data warehousing, Python/Scala, SQL, BI, and other analytics systems

 

What we are looking for

We are seeking a Senior Data Engineer experienced in the architecture, design, and development of highly scalable data integration and data engineering processes

 

  • The Senior Consultant must have a strong understanding of and experience with data & analytics solution architecture, including data warehousing, data lakes, ETL/ELT workload patterns, and related BI & analytics systems
  • Strong in scripting languages like Python and Scala
  • 5+ years of hands-on experience with one or more data integration/ETL tools
  • Experience building on-prem data warehousing solutions
  • Experience with designing and developing ETLs, data marts, and star schemas
  • Experience designing a data warehouse solution using Synapse or Azure SQL DB
  • Experience building pipelines using Synapse or Azure Data Factory to ingest data from various sources
  • Understanding of the integration runtimes available in Azure
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases


Read more
Impetus technologies
Anywhere in India
10 - 12 yrs
₹3L - ₹15L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
Javascript
+11 more

Experience:


Should have a minimum of 10-12 years of experience.

Should have Product Development/Maintenance/Production Support experience in a support organization.

Should have a good understanding of the services business for Fortune 1000 companies from the operations point of view.

Ability to read, understand and communicate complex technical information.

Ability to express ideas in an organized, articulate and concise manner.

Ability to face stressful situations with a positive attitude.

Any certification related to support services will be an added advantage.

 


Education: BE, B- Tech (CS), MCA

Location: India

Primary Skills:

 

Hands-on experience with the OpenStack framework; ability to set up a private cloud using an OpenStack environment; awareness of the various OpenStack services and modules

Strong experience with OpenStack services like Neutron, Cinder, Keystone, etc.

Proficiency in programming languages such as Python, Ruby, or Go.

Strong knowledge of Linux systems administration and networking.

Familiarity with virtualization technologies like KVM or VMware.

Experience with configuration management and IaC tools like Ansible, Terraform.

Subject matter expertise in OpenStack security

Solid experience with Linux and shell scripting

Sound knowledge of cloud computing concepts & technologies, such as docker, Kubernetes, AWS, GCP, Azure etc.

Ability to configure OpenStack environment for optimum resources

Good knowledge of security and operations in an OpenStack environment

Strong knowledge of Linux internals, networking, storage, security

Strong knowledge of VMware Enterprise products (ESX, vCenter)

Hands on experience with HEAT orchestration

Experience with CI/CD, monitoring, operational aspects

Strong experience working with Rest API's, JSON

Exposure to Big Data technologies (messaging queues, Hadoop/MPP, NoSQL databases)

Hands on experience with open source monitoring tools like Grafana/Prometheus/Nagios/Ganglia/Zabbix etc.

Strong verbal and written communication skills are mandatory

Excellent analytical and problem solving skills are mandatory

 

Role & Responsibilities


Advise customers and colleagues on cloud and virtualization topics

Work with the architecture team on cloud design projects using OpenStack

Collaborate with product, customer success, and presales on customer projects

Participate in onsite assessments and workshops when requested 

Provide subject matter expertise and mentor colleagues

Set up open stack environments for projects

Design, deploy, and maintain OpenStack infrastructure.

Collaborate with cross-functional chapters to integrate OpenStack with other services (k8s, DBaaS)

Develop automation scripts and tools to streamline OpenStack operations (see the sketch after this list).

Troubleshoot and resolve issues related to OpenStack services.

Monitor and optimize the performance and scalability of OpenStack components.

Stay updated with the latest OpenStack releases and contribute to the OpenStack community.

Work closely with Architects and Product Management to understand requirement

Should be capable of working independently and be responsible for end-to-end implementation

Should work with complete ownership and handle all issues without missing SLAs

Work closely with engineering team and support team

Should be able to debug the issues and report appropriately in the ticketing system

Contribute to improving the efficiency of the assignment through quality improvements and innovative suggestions

Should be able to debug/create scripts for automation

Should be able to configure monitoring utilities & set up alerts

Should be hands on in setting up OS, applications, databases and have passion to learn new technologies

Should be able to scan logs, errors, exception and get to the root cause of the issue

Contribute to developing a knowledge base in collaboration with other team members

Maintain customer loyalty through Integrity and accountability

Groom and mentor team members on project technologies and work
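
As a small illustration of the automation scripting mentioned in this list, a minimal sketch assuming the openstacksdk package and a configured clouds.yaml entry (the cloud name is hypothetical):

```python
# Minimal sketch: inventory compute instances and flag ones in an error state.
import openstack

conn = openstack.connect(cloud="private-cloud")  # hypothetical clouds.yaml entry

for server in conn.compute.servers():
    print(f"{server.name}: status={server.status}")
    if server.status == "ERROR":
        print(f"  -> needs investigation (id={server.id})")
```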

Read more
EMAlpha
Posted by Sash Sarangi
Remote only
2 - 5 yrs
₹6L - ₹12L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
Javascript
+19 more

We require a full-stack Senior SDE with a focus on backend microservices / modular monoliths, with 3-4+ years of experience in the following:

  • Bachelor’s or Master’s degree in Computer Science or equivalent industry technical skills
  • Mandatory: in-depth knowledge of and strong experience in the Python programming language.
  • Expertise and significant work experience in Python with FastAPI and async frameworks (see the sketch after this list).
  • Prior experience building Microservice and/or modular monolith.
  • Should be an expert in Object-Oriented Programming and Design Patterns.
  • Has knowledge of and experience with SQLAlchemy/ORM, Celery, Flower, etc.
  • Has knowledge of and experience with Kafka / RabbitMQ and Redis.
  • Experience in Postgres / CockroachDB.
  • Experience in MongoDB/DynamoDB and/or Cassandra is an added advantage.
  • Strong experience in AWS services (e.g., EC2, ECS, Lambda, Step Functions, S3, SQS, Cognito) and/or equivalent Azure services preferred.
  • Experience working with Docker required.
  • Experience in socket.io is an added advantage.
  • Experience with CI/CD (e.g., GitHub Actions) preferred.
  • Experience with version control tools like Git.
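
For illustration, a minimal sketch of the async FastAPI style this role asks for (the route and names are hypothetical):

```python
# Minimal sketch: an async FastAPI endpoint; run with `uvicorn main:app`.
import asyncio

from fastapi import FastAPI

app = FastAPI(title="example-service")

@app.get("/items/{item_id}")
async def read_item(item_id: int) -> dict:
    # Simulate non-blocking I/O (e.g., an async database call).
    await asyncio.sleep(0.01)
    return {"item_id": item_id, "status": "ok"}
```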


This is one of the early positions for scaling up the Technology team. So culture-fit is really important.

  • The role will require serious commitment, and someone with a similar mindset with the team would be a good fit. It's going to be a tremendous growth opportunity. There will be challenging tasks. A lot of these tasks would involve working closely with our AI & Data Science Team.
  • We are looking for someone who has considerable expertise and experience on a low latency highly scaled backend / fullstack engineering stack. The role is ideal for someone who's willing to take such challenges.
  • Coding Expectation – 70-80% of time.
  • Has worked with enterprise solution company / client or, worked with growth/scaled startup earlier.
  • Skills to work effectively in a distributed and remote team environment.
Read more
Quadratic Insights
Posted by Praveen Kondaveeti
Hyderabad
7 - 10 yrs
₹15L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

About Quadratyx:

We are a global product-centric insight & automation services company. We help the world’s organizations make better & faster decisions using the power of insight & intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing and Advanced Analytics. Quadratyx can boast more extensive experience in data sciences & analytics than most other companies in India.

We firmly believe in Excellence Everywhere.


Job Description

Purpose of the Job/ Role:

• As a Technical Lead, your work is a combination of hands-on contribution, customer engagement and technical team management. Overall, you’ll design, architect, deploy and maintain big data solutions.


Key Requisites:

• Expertise in Data structures and algorithms.

• Technical management across the full life cycle of big data (Hadoop) projects from requirement gathering and analysis to platform selection, design of the architecture and deployment.

• Scaling of cloud-based infrastructure.

• Collaborating with business consultants, data scientists, engineers and developers to develop data solutions.

• Experience leading and mentoring a team of data engineers.

• Hands-on experience in test-driven development (TDD).

• Expertise in NoSQL databases like Mongo, Cassandra, etc. (Mongo preferred) and strong knowledge of relational databases.

• Good knowledge of Kafka and Spark Streaming internal architecture.

• Good knowledge of any Application Servers.

• Extensive knowledge of big data platforms like Hadoop, Hortonworks, etc.

• Knowledge of data ingestion and integration on cloud services such as AWS, Google Cloud, Azure, etc.


Skills/ Competencies Required

Technical Skills

• Strong expertise (9 or more out of 10) in at least one modern programming language, like Python or Java.

• Clear end-to-end experience in designing, programming, and implementing large software systems.

• Passion and analytical abilities to solve complex problems.

Soft Skills

• Always speaking your mind freely.

• Communicating ideas clearly in speech and writing; integrity to never copy or plagiarize others’ intellectual property.

• Exercising discretion and independent judgment where needed in performing duties; not needing micro-management, maintaining high professional standards.


Academic Qualifications & Experience Required

Required Educational Qualification & Relevant Experience

• Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute.

• Minimum 7 - 10 years of work experience as a developer in an IT organization (preferably with an Analytics / Big Data / Data Science / AI background).

Read more
Hexr Factory Immersive Tech
Posted by Fathariya Begam
Chennai
0 - 2 yrs
₹2L - ₹5L / yr
C++
OpenCV
Computer Vision
Git
Ruby
+3 more

Company Introduction


About Hexr Factory :

We are always exploring the possibilities to bridge the physical and digital worlds. We design and build Metaverse & Digital twin technologies for the future of industry and entertainment.


Job type & Location


Project Role: C++ Developer


Project Role Description :


The primary focus will be the development of all core engines, designing back-end components, integrating data storage, and ensuring high performance and responsiveness to requests from the front end. You will also be responsible for integrating the front-end elements built by your co-workers or third parties into the application.


Work Experience: 2 - 8 years


Work location: Chennai


Must-Have Skills: C++, OpenCV, Ruby, Boost C++ libraries, MySQL, MQTT. 


Key Responsibilities:

  • Extensive knowledge of C++ frameworks and libraries that utilize OpenCV, FFmpeg, video processing, and analytics.
  • Multi-threading programming, Distributed and Parallel computing, Big Data technologies, SQL Server programming with T-SQL, and Microsoft data access technologies.
  • Familiar with libraries like OpenCV, FFmpeg, GStreamer, and Directshow.
  • Extensive knowledge of RTSP, RTMP, and HLS video streaming protocols (see the capture sketch after this list).
  • Candidate should know about release activities, source control, merging, and branching concepts.
  • Ability to analyze and visualize BIG data effectively.
  • Machine learning:- KNN, SVM, Text Search.
  • Familiar with QGIS.
  • A good understanding of Computer Vision and Image Processing concepts and algorithms.
  • Thorough knowledge of the standard library, STL containers, and algorithms.
  • Strong background in object-oriented design, prioritizing test ability and re-usability.
  • Familiarity with embedded systems design and low-level hardware interactions.
  • Proven track record of identifying bottlenecks and bugs, and devising solutions to these problems.
  • Hands-on Algorithm development and implementation.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
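
Although this role targets the C++ APIs, the capture flow behind the RTSP work above is easiest to sketch with OpenCV's Python bindings; the open, read, process, release sequence is the same in C++. The stream URL is a hypothetical stand-in.

```python
# Minimal sketch: pulling frames from an RTSP stream with OpenCV.
import cv2

cap = cv2.VideoCapture("rtsp://camera.local/stream1")  # hypothetical URL
if not cap.isOpened():
    raise RuntimeError("could not open RTSP stream")

while True:
    ok, frame = cap.read()          # grab and decode the next frame
    if not ok:
        break                       # stream ended or connection dropped
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # stand-in analytics step
    # ... run detection / analytics on `gray` here ...

cap.release()
```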


Skills Required : 

  • Experience with existing computer vision toolkits such as Open-CV.
  • Current trends within Computer Vision and Image Processing in academia and community.
  • Deep learning using convolution neural networks for object classification, recognition, or sequence modeling.
  • Experience with any of the following: Object detection and target tracking, simultaneous localization and mapping, 3D reconstruction, camera calibration, behavior analysis, automated video surveillance, virtual makeup, and related fields.
  • Proficient understanding of code versioning tools, such as Git.
  • Passionate about new technology and innovation.
  • Understanding the nature of asynchronous programming and its quirks and workarounds.
  • Excellent verbal and written communication skills.



Read more
Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Sukhdeep Singh
Posted by Sukhdeep Singh
Chennai
4 - 7 yrs
₹13L - ₹15L / yr
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
+10 more

Title: Platform Engineer
Location: Chennai
Work Mode: Hybrid (Remote and Chennai Office)
Experience: 4+ years
Budget: 16 - 18 LPA

Responsibilities:

  • Parse data using Python, create dashboards in Tableau.
  • Utilize Jenkins for Airflow pipeline creation and CI/CD maintenance.
  • Migrate Datastage jobs to Snowflake, optimize performance.
  • Work with HDFS, Hive, Kafka, and basic Spark.
  • Develop Python scripts for data parsing, quality checks, and visualization.
  • Conduct unit testing and web application testing.
  • Implement Apache Airflow and handle production migration (a minimal DAG sketch follows this list).
  • Apply data warehousing techniques for data cleansing and dimension modeling.
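
A minimal sketch of the Airflow usage described above: a two-task DAG wiring a parse step to a quality check. The DAG id, schedule and task bodies are hypothetical placeholders.

```python
# Minimal sketch of an Airflow DAG: parse step followed by a quality check.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def parse_data():
    print("parsing source files")          # placeholder for real parsing logic

def check_quality():
    print("running data quality checks")   # placeholder for real checks

with DAG(
    dag_id="parse_and_check",              # hypothetical DAG id
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    parse = PythonOperator(task_id="parse_data", python_callable=parse_data)
    check = PythonOperator(task_id="check_quality", python_callable=check_quality)
    parse >> check                         # quality check runs after parsing
```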

Requirements:

  • 4+ years of experience as a Platform Engineer.
  • Strong Python skills, knowledge of Tableau.
  • Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
  • Proficient in Unix Shell Scripting and SQL.
  • Familiarity with ETL tools like DataStage and DMExpress.
  • Understanding of Apache Airflow.
  • Strong problem-solving and communication skills.

Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.

Read more
Gipfel & Schnell Consultings Pvt Ltd
TanmayaKumar Pattanaik
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
₹9L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

Qualifications & Experience:


▪ 2 - 4 years overall experience in ETLs, data pipeline, Data Warehouse development and database design

▪ Software solution development using Hadoop Technologies such as MapReduce, Hive, Spark, Kafka, Yarn/Mesos etc.

▪ Expert in SQL, worked on advanced SQL for at least 2+ years

▪ Good development skills in Java, Python or other languages

▪ Experience with EMR, S3 (see the sketch after this list)

▪ Knowledge and exposure to BI applications, e.g. Tableau, Qlikview

▪ Comfortable working in an agile environment
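
As a hedged illustration of the EMR/S3 and advanced-SQL items above, a PySpark job might register files from S3 and aggregate them with a window function, as sketched below. The bucket, path and column names are hypothetical, and the hadoop-aws (s3a) connector is assumed to be configured.

```python
# Minimal sketch: Spark SQL over Parquet files stored in S3.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-sql-demo").getOrCreate()

orders = spark.read.parquet("s3a://example-bucket/orders/")  # hypothetical path
orders.createOrReplaceTempView("orders")

# An "advanced SQL" style aggregation using a window function.
top_items = spark.sql("""
    SELECT item_id, revenue,
           RANK() OVER (ORDER BY revenue DESC) AS revenue_rank
    FROM (SELECT item_id, SUM(amount) AS revenue
          FROM orders
          GROUP BY item_id) AS item_totals
""")
top_items.show()
```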

Read more
iLink Systems

at iLink Systems

1 video
1 recruiter
Ganesh Sooriyamoorthu
Posted by Ganesh Sooriyamoorthu
Chennai, Pune, Noida, Bengaluru (Bangalore)
5 - 15 yrs
₹10L - ₹15L / yr
Apache Kafka
Big Data
Java
Spark
Hadoop
+1 more
  • KSQL
  • Data Engineering spectrum (Java/Spark)
  • Spark Scala / Kafka Streaming
  • Confluent Kafka components (see the consumer sketch below)
  • Basic understanding of Hadoop
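
As a sketch of working with Confluent Kafka components from code, here is a minimal confluent-kafka consumer loop; the broker, group id and topic are hypothetical stand-ins.

```python
# Minimal sketch: consuming a Kafka topic with the confluent-kafka client.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # hypothetical broker
    "group.id": "demo-group",             # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])            # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # block up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(msg.key(), msg.value())
finally:
    consumer.close()
```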


Read more
Shiprocket

at Shiprocket

5 recruiters
Kailuni Lanah
Posted by Kailuni Lanah
Gurugram
4 - 10 yrs
₹25L - ₹35L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more

We are seeking an experienced Senior Data Platform Engineer to join our team. The ideal candidate should have extensive experience with Pyspark, Airflow, Presto, Hive, Kafka and Debezium, and should be passionate about developing scalable and reliable data platforms.

Responsibilities:

  • Design, develop, and maintain our data platform architecture using Pyspark, Airflow, Presto, Hive, Kafka, and Debezium (a Presto/Trino query sketch follows this list).
  • Develop and maintain ETL processes to ingest, transform, and load data from various sources into our data platform.
  • Work closely with data analysts, data scientists, and other stakeholders to understand their requirements and design solutions that meet their needs.
  • Implement and maintain data governance policies and procedures to ensure data quality, privacy, and security.
  • Continuously monitor and optimize the performance of our data platform to ensure scalability, reliability, and cost-effectiveness.
  • Keep up-to-date with the latest trends and technologies in the field of data engineering and share knowledge and best practices with the team.
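
As a sketch of day-to-day querying against this stack, the `trino` Python client (Trino being the continuation of the Presto project) can query Hive tables through the coordinator. The host, catalog and table below are hypothetical.

```python
# Minimal sketch: querying Hive tables through a Presto/Trino coordinator.
import trino

conn = trino.dbapi.connect(
    host="presto.internal",   # hypothetical coordinator
    port=8080,
    user="etl",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()
cur.execute("SELECT shipment_id, status FROM shipments LIMIT 10")  # hypothetical table
for row in cur.fetchall():
    print(row)
```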

Requirements:

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • 5+ years of experience in data engineering or related fields.
  • Strong proficiency in Pyspark, Airflow, Presto, Hive, Datalake, and Debezium.
  • Experience with data warehousing, data modeling, and data governance.
  • Experience working with large-scale distributed systems and cloud platforms (e.g., AWS, GCP, Azure).
  • Strong problem-solving skills and ability to work independently and collaboratively.
  • Excellent communication and interpersonal skills.

If you are a self-motivated and driven individual with a passion for data engineering and a strong background in Pyspark, Airflow, Presto, Hive, Datalake, and Debezium, we encourage you to apply for this exciting opportunity. We offer competitive compensation, comprehensive benefits, and a collaborative work environment that fosters innovation and growth.

Read more
Mumbai, Navi Mumbai
6 - 14 yrs
₹16L - ₹37L / yr
Python
PySpark
Data engineering
Big Data
Hadoop
+3 more

Role: Principal Software Engineer


We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. This role involves managing, processing and analyzing large amounts of raw information in scalable databases. It also involves developing unique data structures and writing algorithms for an entirely new set of products. The candidate will be required to have critical thinking and problem-solving skills. Candidates must be experienced in software development with advanced algorithms and must be able to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus. The candidate should have some exposure to cloud environments, continuous integration and agile scrum processes.



Responsibilities:


• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule

• Software Development that creates data driven intelligence in the products which deals with Big Data backends

• Exploratory analysis of the data to be able to come up with efficient data structures and algorithms for given requirements

• The system may or may not involve machine learning models and pipelines but will require advanced algorithm development

• Managing data in large-scale data stores (such as NoSQL DBs, time series DBs, geospatial DBs etc.)

• Creating metrics and evaluating algorithms for better accuracy and recall

• Ensuring efficient access and usage of data through indexing, clustering etc. (see the indexing sketch after this list)

• Collaborate with engineering and product development teams.
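
For the indexing point above, a hedged MongoDB illustration: creating compound and geospatial indexes so common query shapes avoid collection scans. The connection string, collection and field names are hypothetical.

```python
# Minimal sketch: speeding up lookups in a large MongoDB store with indexes.
from pymongo import ASCENDING, GEOSPHERE, MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical URI
events = client["analytics"]["events"]             # hypothetical collection

# Compound index for the most common query shape (device + time range).
events.create_index([("device_id", ASCENDING), ("ts", ASCENDING)])

# Geospatial (2dsphere) index for location queries on a GeoJSON `loc` field.
events.create_index([("loc", GEOSPHERE)])

# The query planner can now satisfy this without a collection scan.
recent = events.find({"device_id": "d-42", "ts": {"$gte": 1700000000}})
```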


Requirements:


• Master’s or Bachelor’s degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or related field from top-tier school

• OR Master’s degree or higher in Statistics, Mathematics, with hands on background in software development.

• 8 to 10 years of experience with product development, having done algorithmic work

• 5+ years of experience working with large data sets or do large scale quantitative analysis

• Understanding of SaaS based products and services.

• Strong algorithmic problem-solving skills

• Able to mentor and manage a team and take responsibility for team deadlines.


Skill set required:


• In-depth knowledge of the Python programming language

• Understanding of software architecture and software design

• Must have fully managed a project with a team

• Having worked with Agile project management practices

• Experience with data processing analytics and visualization tools in Python (such as pandas, matplotlib, Scipy, etc.)

• Strong understanding of SQL and of querying NoSQL databases (e.g. Mongo, Cassandra, Redis)

Read more
Gipfel & Schnell Consultings Pvt Ltd
Aravind Kumar
Posted by Aravind Kumar
Bengaluru (Bangalore)
3 - 8 yrs
Best in industry
Software Testing (QA)
Test Automation (QA)
Appium
Selenium
Java
+11 more

Minimum 4 to 10 years of experience in testing distributed backend software architectures/systems.

• 4+ years of work experience in test planning and automation of enterprise software

• Expertise in programming using Java or Python and other scripting languages.

• Experience with one or more public clouds is expected.

• Comfortable with build processes, CI processes, and managing QA environments, as well as working with build management tools like Git and Jenkins

• Experience with performance and scalability testing tools.

• Good working knowledge of relational databases, logging, and monitoring frameworks is expected.

• Familiarity with how system components interact with an application, e.g. Elasticsearch, Mongo, Kafka, Hive, Redis, AWS

Read more
Vithamas Technologies Pvt LTD
Mysore
4 - 6 yrs
₹10L - ₹20L / yr
Data modeling
ETL
Oracle
MS SQLServer
MongoDB
+4 more

Required Skills:


• Minimum of 4-6 years of experience in data modeling (including conceptual, logical and physical data models).

• 2-3 years of experience in Extraction, Transformation and Loading (ETL) work using data migration tools like Talend, Informatica, Datastage, etc.

• 4-6 years of experience as a database developer in Oracle, MS SQL or another enterprise database, with a focus on building data integration processes.

• Candidate should have exposure to a NoSQL technology, preferably MongoDB.

• Experience in processing large data volumes, indicated by experience with big data platforms (Teradata, Netezza, Vertica or Cloudera, Hortonworks, SAP HANA, Cassandra, etc.).

• Understanding of data warehousing concepts and decision support systems.

• Ability to deal with sensitive and confidential material and adhere to worldwide data security and privacy requirements.

• Experience writing documentation for design and feature requirements.

• Experience developing data-intensive applications on cloud-based architectures and infrastructures such as AWS, Azure etc.

• Excellent communication and collaboration skills.

Read more
Ajargh Kreation
Koramangala
3 - 6 yrs
₹6L - ₹10L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB
+7 more

KEY RESPONSIBILITIES

  • Building a website based on the given requirements and ensure it’s successfully deployed
  • Responsible for designing, planning, and testing new web pages and site features
  • A propensity for brainstorming and coming up with solutions to open-ended problems
  • Work closely with other teams, and project managers, to understand all stakeholders’ requirements and ensure that all specifications and requirements are met in final development
  • Troubleshoot and solve problems related to website functionality
  • Takes ownership of initiatives and drives them to completion.
  • Desire to learn and dive deep into new technologies on the job, especially around modern data storage and streaming open source systems
  • Responsible for creating, optimizing, and managing REST APIs
  • Create website content and enhance website usability and visibility
  • Ensure cross-browser compatibility and testing for mobile responsiveness
  • Ability to integrate payment processing and search functionality software solutions
  • Stay up-to-date with technological advancements and the latest coding practices
  • Collaborate with the team of designers, content managers, and developers to determine site goals, functionality, and layout
  • Monitor website traffic and overall system’s health with Google analytics to ensure high GTmetrix score
  • Build the front-end of applications through appealing visual design
  • Design client-side and server-side architecture
  • Develop server-side logic and APIs that integrate with front-end applications.
  • Architect and design complex database structures and data models.
  • Develop and implement backend systems to support scalable and high-performance web applications.
  • Create automated tests to ensure system stability and performance.
  • Ensure security and data privacy measures are maintained throughout the development process.
  • Maintain an up-to-date changelog for all new, updated, and fixed changes.
  • Ability to document and manage all the software design, requirements, reusable & transferable code, and other technical aspects of the project.
  • Create and convert storyboards and wireframes into high-quality full-stack code
  • Write, execute, and maintain clean, reusable, and scalable code
  • Design and implement low-latency, high-availability, and performant applications
  • Implement security and data protection
  • Ensure code that is platform and device-agnostic

EDUCATION & SKILLS REQUIREMENT

  • B.Tech. / BE / MS degree in Computer Science or Information Technology
  • Expertise in MERN stack (MongoDB, Express.js, React.js, Node.js)
  • Should have prior working experience of at least 3 years as web developer or full stack developer
  • Should have done projects in e-commerce or have preferably worked with companies operating in e-commerce
  • Should have expert-level knowledge in implementing frontend technologies
  • Should have worked in creating backend and have deep understanding of frameworks
  • Experience in the complete product development life cycle
  • Hands-on experience with JavaScript, HTML, CSS, JQuery, JSON, PHP, XML
  • Proficiency in databases, both transactional and analytical (e.g., MySQL, MongoDB, PostgreSQL, DynamoDB, Redis, Hive, Elastic etc.)
  • Knowledge of architecting or implementing search APIs
  • Great understanding of data modeling and RESTful APIs
  • Strong knowledge of CS fundamentals, data structures, algorithms, and design patterns
  • Strong analytical, consultative, and communication skills
  • Excellent understanding of Microsoft office tools : excel, word, powerpoint etc.
  • Excellent organizational and time management skills
  • Experience with responsive and adaptive design (Web, Mobile & App)
  • Should be a self starter and have ability to work without being supervised
  • Excellent debugging and optimization skills
  • Experience building high throughput/low latency systems.
  • Knowledge of big data systems such as Cassandra, Elastic, Kafka, Kubernetes, and Docker
  • Should be willing to be a part of a small team and working in fast-paced environment
  • Should be highly passionate about building products that create a significant impact.
  • Should have experience in user experience design, website optimization techniques and different PIM tools


Read more
Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Sukhdeep Singh
Posted by Sukhdeep Singh
Gurugram
4 - 7 yrs
₹10L - ₹15L / yr
NodeJS (Node.js)
MongoDB
Mongoose
Express
Microservices
+12 more

Job description

  • Engage with the business team and stakeholder at different levels to understand business needs, analyze, document, prioritize the requirements, and make recommendations on the solution and implementation.
  • Delivering the product that meets business requirements, reliability, scalability, and performance goals
  • Work with Agile scrum team and create the scrum team strategy roadmap/backlog, develop minimal viable product and Agile user stories that drive a highly effective and efficient project development and delivery scrum team.
  • Work on Data mapping/transformation, solution design, process diagram, acceptance criteria, user acceptance testing and other project artifacts.
  • Work effectively with the technical/development team and help them understand the specifications/requirements for technical development, testing and implementation.
  • Ensure solutions promote simplicity, efficiency, and conform to enterprise and architecture standards and guidelines.
  • Partner with the support organization to provide training, support and technical assistance to operation team and end users as necessary
  • Product/Application Developer
  • Designs and develops software applications based on user requirements in a variety of coding environments such as graphical user interface, database query languages, report writers, and specific development languages
  • Consult on the use and implementation of software products and applications and specialize in the business development environment, including the selection of development tools and methodology

Primary / Mandatory skills:


  • Overall Experience: 4 to 6 years of IT development experience
  • Design and Code NodeJS based Microservices, API Webservices, NoSql technologies (Cassandra/MongoDb)
  • Expert in developing code for Node-JS based Microservice in TypeScript
  • Good experience in understanding data transmission through pub/sub mechanisms like Event Hub and Kafka
  • Good understanding of analytics and clickstream data capture is a HUGE plus
  • Good understanding of frameworks like Java Spring Boot, Python is preferred
  • Good understanding of Microsoft Azure principles and services is preferred
  • Able to write Unit test cases
  • Familiarity with performance testing tools such as Akamai SOASTA is preferred
  • Good knowledge of source code control like Git, CodeCloud, etc. and understanding of CI/CD (Jenkins and Kubernetes)
  • Solid technical background with understanding and/or experience in software development and web technologies
  • Strong analytical skills and the ability to convert consumer insights and performance data into high impact initiatives
  • Experience working within scaled agile development team
  • Excellent written and verbal communication skills with demonstrated ability to present complex technical information in a clear manner to peers, developers, and senior leaders
  • The desire to be continually learning about emerging technologies/industry trends


Read more
StashAway
Joshua YAP
Posted by Joshua YAP
Remote only
3 - 6 yrs
S$3K - S$9K / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
EKS
+3 more

We are looking for a DevOps Engineer (individual contributor) to maintain and build upon our next-generation infrastructure. We aim to ensure that our systems are secure, reliable and high-performing by constantly striving to achieve best-in-class infrastructure and security by:


  • Leveraging a variety of tools to ensure all configuration is codified (using tools like Terraform and Flux) and applied in a secure, repeatable way (via CI)
  • Routinely identifying new technologies and processes that enable us to streamline our operations and improve overall security
  • Holistically monitoring our overall DevOps setup and health to ensure our roadmap constantly delivers high-impact improvements
  • Eliminating toil by automating as many operational aspects of our day-to-day work as possible using internally created, third party and/or open-source tools
  • Maintain a culture of empowerment and self-service by minimizing friction for developers to understand and use our infrastructure through a combination of innovative tools, excellent documentation and teamwork


Tech stack: Microservices primarily written in JavaScript, Kotlin, Scala, and Python. The majority of our infrastructure sits within EKS on AWS, using Istio. We use Terraform and Helm/Flux when working with AWS and EKS (k8s). Deployments are managed with a combination of Jenkins and Flux. We rely heavily on Kafka, Cassandra, Mongo and Postgres and are increasingly leveraging AWS-managed services (e.g. RDS, lambda).



Read more
Kloud9 Technologies
Bengaluru (Bangalore)
3 - 6 yrs
₹5L - ₹20L / yr
Amazon Web Services (AWS)
Amazon EMR
EMR
Spark
PySpark
+9 more

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by the finances spent on physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform and technology independent. Our vendor independence not only provides us with a unique perspective into the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.


What we are looking for:

● 3+ years’ experience developing Data & Analytic solutions

● Experience building data lake solutions leveraging one or more of the following: AWS EMR, S3, Hive & Spark

● Experience with relational SQL

● Experience with scripting languages such as Shell, Python

● Experience with source control tools such as GitHub and related dev process

● Experience with workflow scheduling tools such as Airflow

● In-depth knowledge of scalable cloud architectures

● Has a passion for data solutions

● Strong understanding of data structures and algorithms

● Strong understanding of solution and technical design

● Has a strong problem-solving and analytical mindset

● Experience working with Agile Teams.

● Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders

● Able to quickly pick up new programming languages, technologies, and frameworks

● Bachelor’s Degree in computer science


Why Explore a Career at Kloud9:

 

With job opportunities in prime locations of US, London, Poland and Bengaluru, we help build your career paths in cutting edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with their creativity and innovative solutions. Our vested interest in our employees translates to deliver the best products and solutions to our customers.

Read more
TensorGo Software Private Limited
Deepika Agarwal
Posted by Deepika Agarwal
Remote only
5 - 8 yrs
₹5L - ₹15L / yr
Python
PySpark
apache airflow
Spark
Hadoop
+4 more

Requirements:

● Understanding our data sets and how to bring them together.

● Working with our engineering team to support custom solutions offered to the product development.

● Filling the gap between development, engineering and data ops.

● Creating, maintaining and documenting scripts to support ongoing custom solutions.

● Excellent organizational skills, including attention to precise details

● Strong multitasking skills and ability to work in a fast-paced environment

● 5+ years experience with Python to develop scripts.

● Know your way around RESTful APIs (able to integrate; publishing not necessary).

● You are familiar with pulling and pushing files from SFTP and AWS S3 (a sketch follows this list).

● Experience with any Cloud solutions including GCP / AWS / OCI / Azure.

● Familiarity with SQL programming to query and transform data from relational Databases.

● Familiarity to work with Linux (and Linux work environment).

● Excellent written and verbal communication skills

● Extracting, transforming, and loading data into internal databases and Hadoop

● Optimizing our new and existing data pipelines for speed and reliability

● Deploying product build and product improvements

● Documenting and managing multiple repositories of code

● Experience with SQL and NoSQL databases (Cassandra, MySQL)

● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, RedShift, Athena)

● Hands-on experience in Airflow

● Understanding of best practices and common coding patterns around storing, partitioning, warehousing and indexing of data

● Experience in reading the data from Kafka topic (both live stream and offline)

● Experience in PySpark and Data frames
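
A minimal sketch of the SFTP-to-S3 movement mentioned in the list above, using paramiko and boto3. Hosts, credentials, paths and bucket are hypothetical, and a real job should use key-based auth and a credential store rather than inline secrets.

```python
# Minimal sketch: pulling a file over SFTP and pushing it to S3.
import boto3
import paramiko

# --- SFTP pull ---
transport = paramiko.Transport(("sftp.partner.com", 22))   # hypothetical host
transport.connect(username="etl", password="secret")       # use keys in practice
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.get("/outbox/daily.csv", "/tmp/daily.csv")            # remote -> local
sftp.close()
transport.close()

# --- S3 push ---
s3 = boto3.client("s3")
s3.upload_file("/tmp/daily.csv", "example-bucket", "landing/daily.csv")
```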

Responsibilities:

You’ll be:

● Collaborating across an agile team to continuously design, iterate, and develop big data systems.

● Extracting, transforming, and loading data into internal databases.

● Optimizing our new and existing data pipelines for speed and reliability.

● Deploying new products and product improvements.

● Documenting and managing multiple repositories of code.

Read more
LiftOff Software India

at LiftOff Software India

2 recruiters
Hameeda Haider
Posted by Hameeda Haider
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹1L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Why LiftOff? 

 

We at LiftOff specialize in product creation, for our main forte lies in helping Entrepreneurs realize their dream. We have helped businesses and entrepreneurs launch more than 70 plus products.

Many on the team are serial entrepreneurs with a history of successful exits.

 

As a Data Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.

 

About the Role

 

If you’re driven by the passion to build something great from scratch, a desire to innovate, and a commitment to achieve excellence in your craft, LiftOff is a great place for you.


  • Architect / design / configure the data ingestion pipeline for data received from third-party vendors
  • Data loading should be configured with ease/flexibility for adding new data sources and also refreshing previously loaded data
  • Design & implement a consumer graph that provides an efficient means to query the data via email, phone, and address information, using any one of the fields or a combination (see the sketch after this list)
  • Expose the consumer graph/search capability for consumption by our middleware APIs, which would be shown in the portal
  • Design / review the current client-specific data storage, which is kept as a copy of the consumer master data for easier retrieval/query for subsequent usage
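
To make the consumer-graph idea above concrete, here is a deliberately tiny sketch: each record is indexed under every identifier it carries, so a lookup by email, phone or address resolves to the same consumer node. The record shape is hypothetical, and a production design would live in a graph or indexed store rather than in memory.

```python
# Minimal sketch: resolving a consumer by any one of its identifiers.
records = [
    {"id": "c1", "email": "a@x.com", "phone": "555-0101", "address": "12 Elm St"},
    {"id": "c2", "email": "b@y.com", "phone": "555-0202", "address": "9 Oak Ave"},
]

index = {}
for rec in records:
    for field in ("email", "phone", "address"):
        index[rec[field]] = rec          # every identifier points at the node

def lookup(identifier: str):
    """Resolve a consumer by any single identifier (None if unknown)."""
    return index.get(identifier)

print(lookup("555-0101")["id"])          # -> c1
```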


Please Note that this is for a Consultant Role

Candidates who are okay with freelancing/Part-time can apply

Read more
Virtusa

at Virtusa

2 recruiters
Priyanka Sathiyamoorthi
Posted by Priyanka Sathiyamoorthi
Chennai
11 - 15 yrs
₹15L - ₹33L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more

We are looking for a Big Data Engineer with Java for our Chennai location.

Location : Chennai 

Exp : 11 to 15 Years 



Job description

Required Skill:

1. Candidate should have a minimum of 7 years of total experience

2. Candidate should have minimum 4 years of experience in Big Data design and development

3. Candidate should have experience in Java, Spark, Hive & Hadoop, Python 

4. Candidate should have experience in any RDBMS.

Roles & Responsibility:

1. To create work plans, monitor and track the work schedule for on time delivery as per the defined quality standards.

2. To develop and guide the team members in enhancing their technical capabilities and increasing productivity.

3. To ensure process improvement and compliance in the assigned module, and participate in technical discussions or review.

4. To prepare and submit status reports for minimizing exposure and risks on the project or closure of escalation


Regards,

Priyanka S


Read more
codersbrain

at codersbrain

1 recruiter
Aishwarya Hire
Posted by Aishwarya Hire
Bengaluru (Bangalore)
4 - 6 yrs
₹8L - ₹10L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more
  • Design the architecture of our big data platform
  • Perform and oversee tasks such as writing scripts, calling APIs, web scraping, and writing SQL queries (see the sketch after this list)
  • Design and implement data stores that support the scalable processing and storage of our high-frequency data
  • Maintain our data pipeline
  • Customize and oversee integration tools, warehouses, databases, and analytical systems
  • Configure and provide availability for data-access tools used by all data scientists
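
As a hedged illustration of the scripting / API / scraping / SQL duties above, the sketch below scrapes headings from a page and lands them in a local SQLite table. The URL and table are hypothetical stand-ins.

```python
# Minimal sketch: a scraping task feeding a SQL store.
import sqlite3

import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/listings", timeout=10).text  # hypothetical URL
soup = BeautifulSoup(html, "html.parser")
titles = [h2.get_text(strip=True) for h2 in soup.find_all("h2")]

conn = sqlite3.connect("scrape.db")
conn.execute("CREATE TABLE IF NOT EXISTS listings (title TEXT)")
conn.executemany("INSERT INTO listings (title) VALUES (?)",
                 [(t,) for t in titles])
conn.commit()
conn.close()
```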


Read more
Cubera Tech India Pvt Ltd
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)
+10 more

Data Engineer- Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

Design & Develop high performance and scalable solutions that meet the needs of our customers.

Closely work with the Product Management, Architects and cross functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in the big data stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.

What are we looking for ( Competencies) :

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java, Python including object-oriented design.

Data handling frameworks: Should have a working knowledge of one or more data handling frameworks like- Hive, Spark, Storm, Flink, Beam, Airflow, Nifi etc.

Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP etc.

Data Store: Must have expertise in one of general-purpose No-SQL data stores like Elasticsearch, MongoDB, Redis, RedShift, etc.

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.

Benefits:

Competitive Salary Packages and benefits.

Collaborative, lively and an upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

Read more
Pune
0 - 1 yrs
₹10L - ₹15L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
SQL
+6 more
1. Work closely with senior engineers to design, implement and deploy applications that impact the business with an emphasis on mobile, payments, and product website development
2. Design software and make technology choices across the stack (from data storage to application to front-end)
3. Understand a range of tier-1 systems/services that power our product to make scalable changes to critical path code
4. Own the design and delivery of an integral piece of a tier-1 system or application
5. Work closely with product managers, UX designers, and end users and integrate software components into a fully functional system
6. Work on the management and execution of project plans and delivery commitments
7. Take ownership of product/feature end-to-end for all phases from the development to the production
8. Ensure the developed features are scalable and highly available with no quality concerns
9. Work closely with senior engineers for refining and implementation
10. Manage and execute project plans and delivery commitments
11. Create and execute appropriate quality plans, project plans, test strategies, and processes for development activities in concert with business and project management efforts
Read more
Concentric AI

at Concentric AI

7 candid answers
1 product
Gopal Agarwal
Posted by Gopal Agarwal
Pune
2 - 10 yrs
₹2L - ₹50L / yr
Software Testing (QA)
Test Automation (QA)
Python
Jenkins
Automation
+9 more
• 3-10 years of experience in test automation for distributed scalable software
• Good QA engineering background with proven automation skills
• Able to understand, design and define approach for automation (Backend/UI/service)
• Design and develop automation scripts for QA testing and tools for quality measurements
• Good to have knowledge of Microservices, API, Web services testing
• Strong in Cloud Engineering skillsets (performance, response time, horizontal scale testing)
• Expertise using automation tools/frameworks (Pytest, Jenkins, Robot, etc; see the sketch after this list)
• Expert at one of the scripting languages – Python, shell, etc
• High level system admin skills to configure and manage test environments
• Basics of Kubernetes and databases like Cassandra, Elasticsearch, MongoDB, etc
• Must have worked in agile environment with CI/CD knowledge
• Having security testing background is a plus
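
A minimal example of the pytest-style automation referenced above: a black-box health check against a service endpoint. The base URL and response shape are hypothetical.

```python
# Minimal sketch: a pytest-style API health check.
import requests

BASE_URL = "http://service.local:8080"   # hypothetical service under test

def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"
```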
Read more
Concentric AI

at Concentric AI

7 candid answers
1 product
Gopal Agarwal
Posted by Gopal Agarwal
Pune
3 - 10 yrs
₹4L - ₹50L / yr
Docker
Kubernetes
DevOps
Python
Jenkins
+9 more
• 3-10 yrs of industry experience
• Energetic self-starter, fast learner, with a desire to work in a startup environment
• Experience working with Public Clouds like AWS
• Operating and Monitoring cloud infrastructure on AWS
• Primary focus on building, implementing and managing operational support
• Design, Develop and Troubleshoot Automation scripts (Configuration/Infrastructure as code or others) for Managing Infrastructure
• Expert at one of the scripting languages – Python, shell, etc
• Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc
• Handling load monitoring, capacity planning, services monitoring
• Proven experience With CICD Pipelines and Handling Database Upgrade Related Issues
• Good Understanding and experience in working with Containerized environments like Kubernetes and Datastores like Cassandra, Elasticsearch, MongoDB, etc
Read more
Hyderabad
4 - 7 yrs
₹14L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Roles and Responsibilities

Big Data Engineer + Spark responsibilities:

• At least 3 to 4 years of relevant experience as a Big Data Engineer.

• Minimum 1 year of relevant hands-on experience with the Spark framework (see the sketch below).

• Minimum 4 years of application development experience using any programming language like Scala/Java/Python.

• Hands-on experience with major components in the Hadoop ecosystem, like HDFS, MapReduce, Hive or Impala.

• Strong programming experience building applications/platforms using Scala/Java/Python.

• Experienced in implementing Spark RDD transformations and actions to implement business analysis.

• An efficient interpersonal communicator with sound analytical problem-solving skills and management capabilities.

• Strives to keep the slope of the learning curve high and is able to quickly adapt to new environments and technologies.

• Good knowledge of agile software development methodology.
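
For the Spark RDD transformations and actions mentioned above, a minimal PySpark sketch; the input values are inline stand-ins.

```python
# Minimal sketch: Spark RDD transformations followed by an action.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-demo").getOrCreate()
sc = spark.sparkContext

orders = sc.parallelize([("north", 120), ("south", 80), ("north", 40)])

# Transformations are lazy: nothing runs until an action is called.
totals = (orders
          .filter(lambda kv: kv[1] >= 50)      # drop small orders
          .reduceByKey(lambda a, b: a + b))    # sum per region

print(totals.collect())  # action; e.g. [('north', 120), ('south', 80)]
```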
Read more
Hyderabad
7 - 12 yrs
₹12L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Skills

• Proficient experience of minimum 7 years with Hadoop.

• Hands-on experience of minimum 2 years with AWS - EMR/S3 and other AWS services and dashboards.

• Good experience of minimum 2 years with the Spark framework.

• Good understanding of the Hadoop ecosystem, including Hive, MR, Spark and Zeppelin.

• Responsible for troubleshooting and recommendations for Spark and MR jobs; should be able to use existing logs to debug issues.

• Responsible for implementation and ongoing administration of Hadoop infrastructure, including monitoring, tuning and troubleshooting.

• Triage production issues when they occur with other operational teams.

• Hands-on experience troubleshooting incidents, formulating theories, testing hypotheses and narrowing down possibilities to find the root cause.
Read more
Classplus

at Classplus

1 video
4 recruiters
Peoples Office
Posted by Peoples Office
Noida
8 - 10 yrs
₹35L - ₹55L / yr
Docker
Kubernetes
DevOps
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+16 more

About us

 

Classplus is India's largest B2B ed-tech start-up, enabling 1 Lac+ educators and content creators to create their digital identity with their own branded apps. Starting in 2018, we have grown more than 10x in the last year, into India's fastest-growing video learning platform.

 

Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon, Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured a “Series-D” funding.

 

Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!

 

 

What will you do?

 

· Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective

 

· Create standardized tooling and templates for development teams to create CI/CD pipelines

 

· Ensure infrastructure is created and maintained using terraform

 

· Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.

 

· Maintain transparency and clear visibility of costs associated with various product verticals, environments and work with stakeholders to plan for optimization and implementation

 

· Spearhead continuous experimenting and innovating initiatives to optimize the infrastructure in terms of uptime, availability, latency and costs

 

 

You should apply, if you

 

 

1. Are a seasoned Veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript (NodeJS), Go, Python, Java, Erlang, Elixir, C++ or Ruby (experience in any one of them is enough)

 

2. Are a Mr. Perfectionist: You have a strong bias for automation and taking the time to think about the right way to solve a problem versus quick fixes or band-aids.

 

3. Bring your A-Game: Have hands-on experience and ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, Logging and experience in setting up backups, patching and DR planning

 

4. Are up with the times: Have expertise in one or more cloud platforms (Amazon WebServices or Google Cloud Platform or Microsoft Azure), and have experience in creating and managing infrastructure completely through Terraform kind of tool

 

5. Have it all on your fingertips: Have experience building CI/CD pipeline using Jenkins, Docker for applications majorly running on Kubernetes. Hands-on experience in managing and troubleshooting applications running on K8s

 

6. Have nailed the data storage game: Good knowledge of Relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)

 

7. Bring that extra zing: Have the ability to program/script and strong fundamentals in Linux and Networking.

 

8. Know your toys: Have a good understanding of Microservices architecture, Big Data technologies and experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self hosted environments, that’s a plus

 

 

Being Part of the Clan

 

At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!

 

It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️

 

Are you a go-getter with the chops to nail what you do? Then this is the place for you.

Read more
Concentric AI

at Concentric AI

7 candid answers
1 product
Gopal Agarwal
Posted by Gopal Agarwal
Pune
4 - 10 yrs
₹10L - ₹45L / yr
Python
Shell Scripting
DevOps
Amazon Web Services (AWS)
Infrastructure architecture
+7 more
About us:

Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.

There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators but securing it is incredibly difficult. It’s the data challenge of our generation.

Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.

That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.

Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/

Title: Cloud DevOps Engineer 

Role: Individual Contributor (4-8 yrs)  

      

Requirements: 

  • Energetic self-starter, a fast learner, with a desire to work in a startup environment  
  • Experience working with Public Clouds like AWS 
  • Operating and Monitoring cloud infrastructure on AWS. 
  • Primary focus on building, implementing and managing operational support 
  • Design, Develop and Troubleshoot Automation scripts (Configuration/Infrastructure as code or others) for Managing Infrastructure. 
  • Expert at one of the scripting languages – Python, shell, etc  
  • Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc 
  • Handling load monitoring, capacity planning, and services monitoring. 
  • Proven experience With CICD Pipelines and Handling Database Upgrade Related Issues. 
  • Good Understanding and experience in working with Containerized environments like Kubernetes and Datastores like Cassandra, Elasticsearch, MongoDB, etc
Read more
RaRa Now

at RaRa Now

3 recruiters
N SHUBHANGINI
Posted by N SHUBHANGINI
Remote only
3 - 5 yrs
₹7L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

About RARA NOW :

  • RaRa Now is revolutionizing instant delivery for e-commerce in Indonesia through data-driven logistics.

  • RaRa Now is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimization technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on one-to-one deliveries, the company has developed proprietary, real-time batching tech to do many-to-many deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan, and many more.

  • We are a distributed team with the company headquartered in Singapore, core operations in Indonesia, and a technology team based out of India.

Future of eCommerce Logistics :

  • Data driven logistics company that is bringing in same-day delivery revolution in Indonesia

  • Revolutionizing delivery as an experience

  • Empowering D2C Sellers with logistics as the core technology

About the Role :

  • Create and maintain optimal data pipeline architecture,
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Prior experience on working on Big Query, Redshift or other data warehouses
Read more
British Telecom
Agency job
via posterity consulting by Kapil Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹14L / yr
Data engineering
Big Data
Google Cloud Platform (GCP)
ETL
Datawarehousing
+6 more
You'll have the following skills & experience:

• Problem Solving: Resolving production issues to fix P1-P4 service issues, problems relating to introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: Understands and is experienced with the use of a wide range of programming concepts, and is also aware of and has applied a range of algorithms.
• Commercial & Risk Awareness: Able to understand & evaluate both obvious and subtle commercial risks, especially in relation to a programme.

Experience you would be expected to have:

• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: experience preferred, but learning essential.
• Big Data: experience with Big Data methodology and technologies
• Programming: Python or Java, having worked with data (ETL)
• DevOps: understand how to work in a DevOps and agile way / versioning / automation / defect management - Mandatory
• Agile methodology - knowledge of Jira
Read more
Chennai, Hyderabad
5 - 10 yrs
₹10L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Bigdata with cloud:

 

Experience : 5-10 years

 

Location : Hyderabad/Chennai

 

Notice period : 15-20 days Max

 

1.  Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight

2.  Experience in developing Lambda functions with AWS Lambda (see the handler sketch below)

3.  Expertise with Spark/PySpark – Candidate should be hands on with PySpark code and should be able to do transformations with Spark

4.  Should be able to code in Python and Scala.

5.  Snowflake experience will be a plus
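
A hedged sketch of the Lambda piece in a Glue -> Athena -> QuickSight pipeline: a handler that starts an Athena query via boto3. The database, table and output bucket are hypothetical.

```python
# Minimal sketch: an AWS Lambda handler that kicks off an Athena query.
import boto3

athena = boto3.client("athena")

def handler(event, context):
    resp = athena.start_query_execution(
        QueryString="SELECT COUNT(*) FROM sales_curated",   # hypothetical table
        QueryExecutionContext={"Database": "analytics"},    # hypothetical database
        ResultConfiguration={
            "OutputLocation": "s3://example-bucket/athena-results/"
        },
    )
    # Athena runs asynchronously; callers poll with this execution id.
    return {"query_execution_id": resp["QueryExecutionId"]}
```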

Read more
Remote only
5 - 8 yrs
₹10L - ₹25L / yr
DevOps
Kubernetes
Docker
SAS
Apache Hive
+2 more

Must Have skills :

Experience in Linux Administration

Experience in building, deploying, and monitoring distributed apps using container systems (Docker) and container orchestration (Kubernetes, EKS)

Ability to read and understand code (Java / Python / R / Scala)

Experience with AWS and its tooling

 

Nice to have skills:

Experience in SAS Viya administration

Experience managing large Big Data clusters

Experience in Big Data tools like Hue, Hive, Spark, Jupyter, SAS and R-Studio

Read more
Encubate Tech Private Ltd
Mumbai
5 - 6 yrs
₹15L - ₹20L / yr
Amazon Web Services (AWS)
Amazon Redshift
Data modeling
ITL
Agile/Scrum
+7 more

Roles and Responsibilities

Seeking an AWS Cloud Engineer / Data Warehouse Developer for our Data CoE team to help us configure and develop new AWS environments for our Enterprise Data Lake and migrate on-premise traditional workloads to the cloud. Must have a sound understanding of BI best practices, relational structures, dimensional data modelling, structured query language (SQL) skills, data warehouse and reporting techniques.

• Extensive experience in providing AWS Cloud solutions to various business use cases.

• Creating star schema data models, performing ETLs and validating results with business representatives (see the star-schema sketch after this section).

• Supporting implemented BI solutions by monitoring and tuning queries and data loads, addressing user questions concerning data integrity, monitoring performance and communicating functional and technical issues.

Job Description:

This position is responsible for the successful delivery of business intelligence information to the entire organization and is experienced in BI development and implementations, data architecture and data warehousing.

Requisite Qualification

Essential: AWS Certified Database Specialty or AWS Certified Data Analytics

Preferred: Any other Data Engineer certification

Requisite Experience

Essential: 4 - 7 yrs of experience

Preferred: 2+ yrs of experience in ETL & data pipelines

Skills Required

• AWS: S3, DMS, Redshift, EC2, VPC, Lambda, Delta Lake, CloudWatch etc.

• Bigdata: Databricks, Spark, Glue and Athena

• Expertise in Lake Formation, Python programming, Spark, Shell scripting

• Minimum Bachelor’s degree with 5+ years of experience in designing, building, and maintaining AWS data components

• 3+ years of experience in data component configuration, related roles and access setup

• Expertise in Python programming

• Knowledge in all aspects of DevOps (source control, continuous integration, deployments, etc.)

• Comfortable working with DevOps: Jenkins, Bitbucket, CI/CD

• Hands-on ETL development experience (preferably SSIS)

• SQL Server experience required

• Strong analytical skills to solve and model complex business requirements

• Sound understanding of BI best practices/methodologies, relational structures, dimensional data modelling, structured query language (SQL) skills, data warehouse and reporting techniques

Preferred Skills

• Experience working in the SCRUM environment.

• Experience in administration (Windows/Unix/Network/Database/Hadoop) is a plus.

• Experience in SQL Server, SSIS, SSAS, SSRS

• Comfortable with creating data models and visualization using Power BI

• Hands-on experience in relational and multi-dimensional data modelling, including multiple source systems from databases and flat files, and the use of standard data modelling tools

• Ability to collaborate on a team with infrastructure, BI report development and business analyst resources, and clearly communicate solutions to both technical and non-technical team members
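
As a hedged illustration of the star-schema modelling above: declaring one dimension and one fact table on Redshift through psycopg2 (Redshift speaks a Postgres-compatible protocol; it accepts but does not enforce key constraints, which still help the planner). Connection details and table names are hypothetical.

```python
# Minimal sketch: a one-fact, one-dimension star schema on Redshift.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc.us-east-1.redshift.amazonaws.com",  # hypothetical
    port=5439, dbname="analytics", user="etl", password="secret",
)
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key INT PRIMARY KEY,
        customer_name VARCHAR(128),
        region VARCHAR(64)
    )
""")
cur.execute("""
    CREATE TABLE IF NOT EXISTS fact_sales (
        sale_id BIGINT,
        customer_key INT REFERENCES dim_customer (customer_key),
        sale_date DATE,
        amount DECIMAL(12, 2)
    )
""")
conn.commit()
cur.close()
conn.close()
```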

Read more
Bengaluru (Bangalore)
7 - 15 yrs
₹15L - ₹25L / yr
Cassandra
Technical Architecture
Debugging
Communication Skills

Cassandra Architect - 7+ Yrs - Bangalore

 

• Strong knowledge of Cassandra architecture, including read/write paths, hinted handoffs, read repairs, compaction, cluster/replication strategies, client drivers, caching and GC tuning.

• Experience in writing queries and performance tuning.

• Experience in handling real-time Cassandra clusters, and in debugging and resolving issues.

• Experience in implementing keyspaces, tables, indexes, security, data models & access administration (see the sketch below).

• Knowledge of Cassandra backup and recovery.

• Good communication skills.
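
To ground the keyspace/table/replication concepts above, a minimal sketch with the DataStax Python driver; the contact point, keyspace and schema are hypothetical.

```python
# Minimal sketch: keyspace, table and query via the DataStax Python driver.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])          # hypothetical contact point
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.set_keyspace("demo")
session.execute("""
    CREATE TABLE IF NOT EXISTS events (
        device_id text,
        ts timestamp,
        reading double,
        PRIMARY KEY (device_id, ts)      -- partition key + clustering column
    )
""")

rows = session.execute(
    "SELECT ts, reading FROM events WHERE device_id = %s LIMIT 10", ("d-42",)
)
for row in rows:
    print(row.ts, row.reading)

cluster.shutdown()
```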

Read more