50+ Big Data Jobs in India
Apply to 50+ Big Data Jobs on CutShort.io. Find your next job, effortlessly. Browse Big Data Jobs and apply today!


We Help Our Customers Build Great Products.
Innominds is a trusted innovation acceleration partner focused on designing, developing, and delivering technology solutions for specialized practices in Big Data & Analytics, Connected Devices, and Security, helping enterprises with their digital transformation initiatives. We built these practices on top of our foundational innovation services, such as UX/UI design, application development, and testing.
Over 1,000 people strong, we are a pioneer at the forefront of technology and engineering R&D, priding ourselves on being forward thinkers who anticipate market changes to help our clients stay relevant and competitive.
About the Role:
We are looking for a seasoned Data Engineering Lead to help shape and evolve our data platform. This role is both strategic and hands-on—requiring leadership of a team of data engineers while actively contributing to the design, development, and maintenance of robust data solutions.
Key Responsibilities:
- Lead and mentor a team of Data Engineers to deliver scalable and reliable data solutions
- Own the end-to-end architecture and development of data pipelines, data lakes, and warehouses
- Design and implement batch data processing frameworks to support large-scale analytics
- Define and enforce best practices in data modeling, data quality, and system performance
- Collaborate with cross-functional teams to understand data requirements and deliver insights
- Ensure smooth and secure data ingestion, transformation, and export processes
- Stay current with industry trends and apply them to drive improvements in the platform
Requirements
- Strong programming skills in Python
- Deep expertise in Apache Spark, Big Data ecosystems, and Airflow (see the illustrative sketch after this list)
- Hands-on experience with Azure cloud services and data engineering tools
- Strong understanding of data architecture, data modeling, and data governance practices
- Proven ability to design scalable data systems and enterprise-level solutions
- Strong analytical mindset and problem-solving skills
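Purely as a rough illustration of the Spark-and-Airflow expertise named above, here is a minimal sketch of a daily batch DAG that chains an extract step into a spark-submit job. The DAG id, file paths, and schedule are hypothetical, and the sketch assumes Airflow 2.x.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical nightly batch: DAG id, paths, and schedule are illustrative only.
with DAG(
    dag_id="nightly_sales_batch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="python /opt/jobs/extract_sales.py",
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit /opt/jobs/transform_sales.py",
    )
    extract >> transform  # run the extract first, then the Spark transform
```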
For our company to deliver world-class products and services, our business depends on recruiting and hiring the best and brightest from around the globe. We are looking for engineers, designers, and creative problem solvers who stand out from the crowd but are humble enough to keep learning and growing, who are eager to tackle complex problems, and who can keep up with the demanding pace of our business. We are looking for YOU!
Job Summary:
We are seeking passionate Developers with experience in Microservices architecture to join our team in Noida. The ideal candidate should have hands-on expertise in Java, Spring Boot, Hibernate, and front-end technologies like Angular, JavaScript, and Bootstrap. You will be responsible for developing enterprise-grade software applications that enhance patient safety worldwide.
Key Responsibilities:
- Develop and maintain applications using Microservices architecture.
- Work with modern technologies like Java, Spring Boot, Hibernate, Angular, Kafka, Redis, and Hazelcast.
- Utilize AWS, Git, Nginx, Tomcat, Oracle, Jira, Confluence, and Jenkins for development and deployment.
- Collaborate with cross-functional teams to design and build scalable enterprise applications.
- Develop intuitive UI/UX components using Bootstrap, jQuery, and JavaScript.
- Ensure high-performance, scalable, and secure applications for Fortune 100 pharmaceutical companies.
- Participate in Agile development, managing changing priorities effectively.
- Conduct code reviews, troubleshoot issues, and optimize application performance.
Required Skills & Qualifications:
- 5+ years of hands-on experience in Java 7/8, Spring Boot, and Hibernate.
- Strong knowledge of OOP concepts and Design Patterns.
- Experience working with relational databases (Oracle/MySQL).
- Proficiency in Bootstrap, JavaScript, jQuery, HTML, and Angular.
- Hands-on experience in Microservices-based application development.
- Strong problem-solving, debugging, and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to adapt to new technologies and manage multiple priorities.
- Experience in developing high-quality web applications.
Good to Have:
- Exposure to Kafka, Redis, and Hazelcast.
- Experience working with cloud-based solutions (AWS preferred).
- Familiarity with DevOps tools like Jenkins, Docker, and Kubernetes.
· 5+ years of experience in software development using Java.
· Proficiency in Spring Boot and Spring Batch.
· Experience with microservices architecture.
· Hands-on experience with Cassandra or similar NoSQL databases.
· Solid understanding of cloud platforms (AWS, GCP, Azure, etc.).
· Familiarity with Docker and Kubernetes.
· Experience with CI/CD tools such as Jenkins.
· Strong problem-solving skills and attention to detail.
· Excellent communication and teamwork skills.
Important consideration:
· Core Java - 4 to 6 yrs
· Spring and Spring Boot, Spring MVC, Spring Data, Spring Security - 4 to 6 yrs
· DevOps (Jenkins, JUnit, SonarQube, Maven) - 1 to 2 yrs
· MongoDB, CouchDB, Cassandra, or other NoSQL databases - 1 to 2 yrs


Role & Responsibilities
Lead and mentor a team of data engineers, ensuring high performance and career growth.
Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
Drive the development and implementation of data governance frameworks and best practices.
Work closely with cross-functional teams to define and execute a data roadmap.
Optimize data processing workflows for performance and cost efficiency.
Ensure data security, compliance, and quality across all data platforms.
Foster a culture of innovation and technical excellence within the data team.


Role: Data Scientist
Location: Bangalore (Remote)
Experience: 3+ years
Skills Required - radiology, visual images, text, classical ML models, multimodal LLMs
JOB DESCRIPTION
We are seeking an experienced Data Scientist with a proven track record in Machine Learning, Deep Learning, and a demonstrated focus on Large Language Models (LLMs) to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that drive real-world impact to patients and healthcare providers.
Responsibilities
• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama 2, Mistral) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering (see the illustrative sketch after this list). Experience with other transformer-based NLP models, such as BERT, is an added advantage.
• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning
• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.
• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models)
• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions
• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
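As a small, non-authoritative illustration of the NLP tasks listed above, here is a sketch using the Hugging Face Transformers pipeline API. The default checkpoints merely stand in for the fine-tuned, task-specific models this role would build, and the sample sentence is invented.

```python
from transformers import pipeline

# Off-the-shelf checkpoints stand in for fine-tuned, task-specific models;
# model choices here are the library defaults, not a recommendation.
classifier = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

note = "Patient reported relief after starting Metformin at City Hospital."
print(classifier(note))  # sentiment label with a confidence score
print(ner(note))         # named entities with character spans and scores
```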
Qualifications Required
• Doctoral or master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field
• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models
• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
• Experience working with cloud-based platforms (AWS, GCP, Azure)
Additional Skills
• Excellent problem-solving and analytical abilities
• Strong communication skills, both written and verbal
• Ability to thrive in a collaborative and fast-paced environment
Job Title: Big Data Engineer (Java Spark Developer – JAVA SPARK EXP IS MUST)
Location: Chennai, Hyderabad, Pune, Bangalore (Bengaluru) / NCR Delhi
Client: Premium Tier 1 Company
Payroll: Direct Client
Employment Type: Full time / Perm
Experience: 7+ years
Job Description:
We are looking for skilled Big Data Engineers with Java Spark experience, 7+ years in Big Data / legacy platforms, who can join immediately. The ideal candidate will have experience designing, developing, and optimizing real-time and batch data pipelines in enterprise-scale Big Data environments. You will work on building scalable, high-performance data processing solutions, integrating real-time data streams, and building reliable data platforms. Strong troubleshooting, performance tuning, and collaboration skills are key for this role.
Key Responsibilities:
· Develop data pipelines using Java Spark and Kafka (see the illustrative sketch after this list).
· Optimize and maintain real-time data pipelines and messaging systems.
· Collaborate with cross-functional teams to deliver scalable data solutions.
· Troubleshoot and resolve issues in Java Spark and Kafka applications.
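The role calls for the Java Spark API; purely to keep the sketch compact, here is the shape of a Spark-plus-Kafka streaming pipeline in PySpark. The broker address and topic are hypothetical, and the job assumes the spark-sql-kafka connector package is supplied at submit time.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

# Requires the spark-sql-kafka connector package at spark-submit time.
spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "payments")                   # hypothetical topic
    .load()
)

# Count messages per key over 1-minute event-time windows.
counts = (
    events.selectExpr("CAST(key AS STRING) AS key", "timestamp")
    .groupBy(window(col("timestamp"), "1 minute"), col("key"))
    .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```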
Qualifications:
· Experience in Java Spark is a must
· Knowledge and hands-on experience using distributed computing, real-time data streaming, and big data technologies
· Strong problem-solving and performance optimization skills
· Looking for immediate joiners
If interested, please share your resume along with the following details:
1) Notice Period
2) Current CTC
3) Expected CTC
4) Have experience in Java Spark - Y / N (this is a must)
5) Any offers in hand
Thanks & Regards,
LION & ELEPHANTS CONSULTANCY PVT LTD TEAM
SINGAPORE | INDIA


Job Title : Principal Software Architect – AI/ML & Product Innovation
Location : Bangalore, Karnataka & Trichy, Tamil Nadu, India (No remote work available)
Company : Zybisys Consulting Services LLP
Reports To : CEO
Job Type : Full-Time
Experience Required: Minimum of 10 years in software development, with at least 5 years in a software architect role.
About Us:
At Zybisys, we’re not just another cloud hosting and software development company—we’re all about pushing boundaries in the FinTech world. We don’t just solve problems; we rethink how businesses operate, making things smoother, smarter, and more efficient. Our tech helps FinTech companies stay ahead in the digital game with confidence and flexibility.
Innovation is in our DNA, and we’re always on the lookout for bold thinkers who can tackle big challenges with creativity and precision. At Zybisys, we believe in growing together, nurturing talent, and building a future where technology transforms the way FinTech works.
Role Overview:
We're looking for a Principal Software Architect who’s passionate about AI/ML and product innovation. In this role, you’ll be at the forefront of designing and building smart, AI-driven solutions that tackle complex business challenges. You’ll work closely with teams across product, development, and research to shape our tech strategy and ensure everything aligns with our next-gen platform. If you love pushing the boundaries of technology and driving real innovation, this is the role for you!
Key Responsibilities:
- Architect & Design: Architect, design, and develop large-scale distributed cloud services and solutions with a focus on AI/ML, high availability, scalability, and robustness. Design scalable and efficient solutions, considering factors such as performance, security, and cost-effectiveness.
- AI/ML Integration: Spearhead the application of AI/ML in solving business problems at scale. Stay at the forefront of AI/ML technologies, trends, and industry standards to provide cutting-edge solutions
- Product Roadmap : Work closely with Product Management to set the technical product roadmap, definition, and direction. Analyze the current technology landscape and identify opportunities for improvement and innovation.
- Technology Evaluation: Evaluate different programming languages and frameworks to determine the most suitable ones for project requirements
- Component Design: Develop and oversee the creation of modular software components that can be reused and adapted across different projects.
- UI/UX Collaboration: Work closely with design teams to craft intuitive and engaging user interfaces and experiences.
- Project Oversight: Oversee projects from initiation to completion, creating project plans, defining objectives, and managing resources effectively
- Team Mentorship: Guide and inspire a team of engineers and designers, fostering a culture of continuous learning and improvement.
- Innovation & Ideation: Champion the generation of new ideas for product features, staying ahead of industry trends and customer needs.
- Research & Development: Lead initiatives that explore new technologies or methodologies.
- Strategic Planning: Participate in high-level decisions that shape the direction of products and services.
- Industry Influence: Represent the company in industry forums and in partnerships with academic institutions.
- Open-Source Community Handling: Manage and contribute to the open-source community, fostering collaboration, sharing knowledge, and ensuring adherence to open-source best practices.
Qualifications:
- Experience: Minimum of 10 years in software development, with at least 5 years in a scalable software architect role.
- Technical Expertise: Proficient in software architecture, AI/ML technologies, and UI/UX principles.
- Leadership Skills: Proven track record of mentoring teams and driving cross-functional collaboration.
- Innovative Mindset: Demonstrated ability to think creatively and introduce groundbreaking ideas.
- Communication: Excellent verbal and written skills, with the ability to engage effectively with both technical and non-technical stakeholders.
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
What We Offer:
- A dynamic work environment where your ideas truly matter.
- Opportunities to attend and speak at industry conferences.
- Collaboration with cutting-edge technology and tools.
- A culture that values innovation, autonomy, and personal growth.
Dear Candidate,
We are urgently looking for a Release / Big Data Engineer for our Pune location.
Experience : 5-8 yrs
Location : Pune
Skills: Big Data Engineering, Release Engineering, DevOps, AWS/Azure/GCP cloud experience
JD:
- Oversee the end-to-end release lifecycle, from planning to post-production monitoring. Coordinate with cross-functional teams (DBA, BizOps, DevOps, DNS).
- Partner with development teams to resolve technical challenges in deployment and automation test runs
- Work with shared services DBA teams for schema-based multi-tenancy designs and smooth migrations.
- Drive automation for batch deployments and DR exercises, including YAML-based microservice deployment using shell, Python, or Go.
- Provide oversight for deployment of Big Data toolsets (e.g., Spark, Hive, HBase) in private cloud and public cloud CDP environments.
- Ensure high-quality releases with a focus on stability and long-term performance.
- Run automation batch scripts, debug deployment and functional issues, and work with dev leads to resolve release-cycle issues.
Regards,
Minakshi Soni
Executive- Talent Acquisition
Rigel Networks
At least 5 years of experience in testing and developing automation tests.
A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.
Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.
Familiarity with Playwright or other browser application testing frameworks is a significant advantage.
Proficiency in object-oriented programming and principles is required.
Extensive knowledge of AWS services is essential.
Strong expertise in REST API testing and SQL is required (see the illustrative sketch after this list).
A solid understanding of testing and development life cycle methodologies is necessary.
Knowledge of the financial industry and trading systems is a plus
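As a small illustration of the REST API testing skill named above, here is a minimal pytest-style sketch using the requests library; the base URL, endpoints, and payload fields are all hypothetical.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


def test_create_and_fetch_order():
    # Create an order, then read it back and verify round-trip integrity.
    created = requests.post(
        f"{BASE_URL}/orders", json={"symbol": "AAPL", "qty": 10}, timeout=10
    )
    assert created.status_code == 201
    order_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["qty"] == 10
```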
Job Title : Senior AWS Data Engineer
Experience : 5+ Years
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Senior AWS Data Engineer with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.); an illustrative sketch follows at the end of this posting.
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.
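Purely as a rough illustration of the S3/Glue-catalog pipeline work named in the responsibilities above, here is a sketch using awswrangler (the AWS SDK for pandas); the bucket paths, column names, and Glue database/table names are hypothetical.

```python
import awswrangler as wr
import pandas as pd

# Extract: read raw CSVs from a hypothetical S3 prefix.
raw = wr.s3.read_csv(path="s3://my-bucket/raw/claims/")

# Transform: coerce types, drop bad rows, derive a partition column.
raw["paid_amount"] = pd.to_numeric(raw["paid_amount"], errors="coerce")
clean = raw.dropna(subset=["claim_id", "paid_amount"]).assign(
    service_year=lambda d: pd.to_datetime(d["service_date"]).dt.year
)

# Load: write partitioned Parquet and register it in the Glue catalog.
wr.s3.to_parquet(
    df=clean,
    path="s3://my-bucket/curated/claims/",
    dataset=True,
    partition_cols=["service_year"],
    database="analytics",        # hypothetical Glue database
    table="claims_curated",      # hypothetical table name
)
```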

Lead Data Scientist
Job Description
As a Lead Data Scientist, you will be responsible for identifying, scoping, and delivering data science projects with a strong emphasis on causal inference.
The ability to take large, scientifically complex projects and break them down into manageable hypotheses and experiments to inform functional specifications, and then deliver features in a successful and timely manner, is expected. Maturity, high judgment, negotiation skills, and the ability to influence are essential to success in this role.
We will rely on your experience in successfully delivering projects that significantly, positively, and measurably affect the business. You should also have experience in large-scale data science projects.
What You'll Do
• Work closely with the Tech team to convert proofs of concept (POCs) into fully scalable products.
• Actively identify existing and new features which could benefit from predictive modelling and productionization of predictive models
• Actively identify and resolve strategic issues that may impair the team’s ability to meet strategic, scientific, and technical goals
• Contribute to research and development of AI/ML techniques and technology that fuels the business innovation and growth of -
• Work closely with engineers to deploy models in production both in real time and in batch process and systematically track model performance
• Encourage team building, best practices sharing especially with more junior team members
Requirements & Skills
• Strong problem solving skills with an emphasis on product development
• Master’s or PhD degree in Statistics, Mathematics, Computer Science, or another quantitative field
• More than 8 years of experience in practicing machine learning and data science in business or a related field, with a focus on statistical analysis
• Strong background in statistics and causal inference (see the illustrative sketch after this list).
• Proficient in statistical programming languages such as Python or R, and data visualization tools (e.g., Tableau, Power BI).
• Strong experience with machine learning algorithms and statistical modeling techniques
• Strong computing/programming skills; Proficient in Python, Spark, SQL, Linux shell script.
• Proven ability to work with large datasets and familiarity with big data technologies (e.g., Hadoop, Spark, SQL) is a plus.
• Experience with end-to-end feature development (owning feature definition, roadmap development, and experimentation).
• Effective leadership and communication skills, with the ability to inspire and guide a team.
• Excellent problem solving and critical thinking capabilities.
• Strong experience in Cloud technology.
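As a small illustration of the causal-inference emphasis above, here is a sketch on synthetic data: a naive difference in means is biased when a confounder drives both treatment and outcome, while a simple regression adjustment recovers the true effect. All numbers are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                                         # confounder
t = (rng.uniform(size=n) < 1 / (1 + np.exp(-x))).astype(int)   # treatment depends on x
y = 2.0 * t + 1.5 * x + rng.normal(size=n)                     # true effect of t is 2.0
df = pd.DataFrame({"y": y, "t": t, "x": x})

# Naive difference in means is biased upward by the confounding through x.
print(df.groupby("t")["y"].mean().diff().iloc[-1])

# Adjusting for x in an OLS model recovers an estimate close to 2.0.
print(smf.ols("y ~ t + x", data=df).fit().params["t"])
```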


Associate Manager - Engineering, -
Location: Bangalore, Karnataka
Type: Full-Time
Reports to: VP-Engineering
Job Purpose:
As an Engineering Manager, you will be responsible for developing, leading, and managing a team of software engineers to deliver high-quality software products and solutions. This role combines technical expertise, project management, and people management to drive the software development process and achieve organizational goals.
Key Responsibilities:
Leadership and Team Development:
• Lead, mentor, and coach a team of software engineers, fostering a collaborative and innovative work environment.
Goal Setting and Feedback:
• Set clear goals and expectations for team members, and provide regular feedback and performance evaluations.
Project Management:
• Oversee the planning, execution, and successful delivery of software development projects.
• Define project scope, objectives, and timelines. Allocate resources effectively and ensure project success.
Technical Expertise and Guidance:
• Provide technical expertise and guidance to the development team, helping them make informed decisions and solve complex technical challenges.
Industry Best Practices:
• Stay updated on industry best practices and emerging technologies.
Process Improvement:
• Implement and improve software development processes, methodologies, and standards, such as Agile or Scrum, to enhance productivity and software quality.
Quality Assurance:
• Ensure code reviews, testing, and quality assurance processes are in place and followed.
Collaboration and Communication:
• Collaborate with product managers, business stakeholders, and other teams to define project requirements, prioritize tasks, and provide regular updates on project status.
• Act as a bridge between technical and non-technical stakeholders.
Resource Management:
• Manage resources effectively, including budget allocation, staffing, and workload distribution.
• Identify and address resource constraints or skill gaps.
Risk Management:
• Identify project risks and develop mitigation plans to address potential challenges or roadblocks.
• Proactively address issues that may impact project timelines or quality.
Quality Standards:
• Ensure that software products meet high-quality standards by implementing testing and quality control processes.
• Address and resolve software defects and issues
Qualifications & Experience:
• Bachelor's or Master's degree in Computer Science, Software Engineering, or related field.
• 12+ years of proven experience in leading software product development, software architecture, and design for complex systems.
• Strong hands-on experience with the SDLC, Agile methodologies, coding, and design.
• Strong understanding of programming languages, frameworks, and tools relevant to software development.
• Proven experience in software development with a strong technical background.
• Prior experience in a leadership or management role, with a track record of successfully leading software development teams.
Skills & Attributes:
Technical Skills:
• Proficiency in software development methodologies (AGILE, SDLC), tools, and best practices.
• Prior working experience with Python API development (frameworks including FastAPI and REST APIs)
• Prior working experience with UI technologies like ReactJS, Redux, HTML5/CSS, and JavaScript
• Good working experience with an RDBMS like PostgreSQL; hands-on experience in SQL is a must
• Nice to have experience in technologies like Spark, Hive
• Experience in building enterprise scale SaaS software products using Microservices architecture and cloud platform like AWS and Azure
• Experience in designing and implementing scalable, distributed systems is preferred.
• Proficiency in ensuring code quality, unit testing, and adherence to coding standards.
• Familiarity with AI/ML concepts and their application is advantageous.
Soft Skills:
• Communication: Ability to articulate complex technical concepts to non-technical stakeholders, as well as to developers and other technical staff.
• Teamwork: Collaborate effectively with various teams (development, QA, product, etc.).
• Problem Solving: Address unforeseen issues and come up with innovative solutions.
• Decision Making: Make informed decisions that consider technical feasibility, business needs, and potential risks.
• Leadership: Provide guidance, mentorship, and direction to engineering teams.
• Time Management: Prioritize tasks effectively to meet deadlines and product milestones.
• Continuous Learning: Stay updated with the latest in technology trends, methodologies, and best practices.
Business-Oriented Skills:
• Product Mindset: Understand the business objectives, user needs, and how technology can align with and fulfill those needs.
• Stakeholder Management: Collaborate and communicate effectively with stakeholders to gather requirements, provide updates, and gather feedback.
• Project Management: Familiarity with project management methodologies (like Agile or Waterfall) to ensure timely product delivery.
• Strategic Thinking: Ability to align technological strategies with business goals and foresee potential technological challenges or opportunities.
• Cost Management: Understand the financial aspects, such as the costs of certain technological solutions, ROI, and TCO.
Job Title : Tech Lead - Data Engineering (AWS, 7+ Years)
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Tech Lead - Data Engineering with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.
Dear Candidate,
We are urgently hiring QA Automation Engineers and Test Leads at Hyderabad and Bangalore.
Exp: 6-10 yrs
Locations: Hyderabad, Bangalore
JD:
We are hiring Automation Testers with 6-10 years of automation testing experience using QA automation tools such as Java, UFT, Selenium, API testing, ETL, and others.
Must Haves:
· Experience in the financial domain is a must.
· Extensive hands-on experience designing, implementing, and maintaining automation frameworks using Java, UFT, ETL, and Selenium tools and automation concepts.
· Experience with AWS concepts and framework design/testing.
· Experience in Data Analysis, Data Validation, Data Cleansing, Data Verification and identifying data mismatch.
· Experience with Databricks, Python, Spark, Hive, Airflow, etc.
· Experience in validating and analyzing Kubernetes log files.
· API testing experience
· Backend testing skills with ability to write SQL queries in Databricks and in Oracle databases
· Experience in working with globally distributed Agile project teams
· Ability to work in a fast-paced, globally structured and team-based environment, as well as independently
· Experience in test management tools like Jira
· Good written and verbal communication skills
Good To have:
- Business and finance knowledge desirable
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Worldwide Locations: USA | HK | IN
Job Summary:
We are seeking a skilled Senior Data Engineer with expertise in application programming, big data technologies, and cloud services. This role involves solving complex problems, designing scalable systems, and working with advanced technologies to deliver innovative solutions.
Key Responsibilities:
- Develop and maintain scalable applications using OOP principles, data structures, and problem-solving skills.
- Build robust solutions using Java, Python, or Scala.
- Work with big data technologies like Apache Spark for large-scale data processing.
- Utilize AWS services, especially Amazon Redshift, for cloud-based solutions.
- Manage databases including SQL, NoSQL (e.g., MongoDB, Cassandra), with Snowflake as a plus.
Qualifications:
- 5+ years of experience in software development.
- Strong skills in OOPS, data structures, and problem-solving.
- Proficiency in Java, Python, or Scala.
- Experience with Spark, AWS (Redshift mandatory), and databases (SQL/NoSQL).
- Snowflake experience is good to have.
At DocNexus, we’re revolutionizing how life sciences companies search and generate insights. Our search platform unlocks powerful insights, and we're seeking a Customer Success Team Member with strong technical skills to help our customers harness its full potential.
What you’ll do:
- Customer Support: Troubleshoot and resolve customer queries, particularly around referral reports, data anomalies, and data generation using our platform.
- Data Queries (BigQuery/ClickHouse): Respond to customer requests for custom data queries, working with large datasets in BigQuery and ClickHouse to deliver precise insights (see the illustrative sketch after this list).
- Onboarding & Training: Lead onboarding for new customers, guide teams on platform usage, and manage access requests.
- Listen & Improve: Collect and act on customer feedback to continuously improve the platform, collaborating with the product team to enhance functionality.
- Technical Documentation: Assist with technical resources and help create training materials for both internal and customer use.
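As a small illustration of the custom-query work named above, here is a sketch using the google-cloud-bigquery client; the project, dataset, table, and column names are hypothetical (a ClickHouse query would follow the same shape with its own client library).

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical referral table; project, dataset, and columns are invented.
query = """
    SELECT referring_npi, COUNT(*) AS referral_count
    FROM `my-project.analytics.referrals`
    GROUP BY referring_npi
    ORDER BY referral_count DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.referring_npi, row.referral_count)
```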
What you bring:
- Strong Technical Skills: Proficient in querying with BigQuery and ClickHouse. Comfortable working with complex data, writing custom queries, and resolving technical issues.
- Customer-Focused: Excellent communication skills, able to translate technical data insights to non-technical users and provide solutions clearly and effectively.
- Problem-Solver: Strong analytical skills and a proactive mindset to address customer needs and overcome challenges in a fast-paced environment.
- Team Player: Work collaboratively with both internal teams and customers to ensure success.
If you're passionate about data, thrive in a technical environment, and are excited to support life sciences teams in their data-driven decision-making, we'd love to hear from you!

Description
Come Join Us
Experience.com - We make every experience matter more
Position: Senior GCP Data Engineer
Job Location: Chennai (Base Location) / Remote
Employment Type: Full Time
Summary of Position
A Senior Data Engineer is a professional who specializes in preparing big data infrastructure for analytical or operational uses. He/she develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity. They collaborate with data scientists and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organisation.
Responsibilities:
- Collaborate with cross-functional teams to define, prioritize, and execute data engineering initiatives aligned with business objectives.
- Design and implement scalable, reliable, and secure data solutions in line with industry best practices and compliance requirements.
- Drive the adoption of cloud-native technologies and architectural patterns to optimize the performance, cost, and reliability of data pipelines and analytics solutions.
- Mentor and lead a team of Data Engineers.
- Demonstrate a drive to learn and master new technologies and techniques.
- Apply strong problem-solving skills with an emphasis on building data-driven or AI-enhanced products.
- Coordinate with ML/AI and engineering teams to understand data requirements.
Experience & Skills:
- 8+ years of strong experience in ETL and ELT of data from various sources into data warehouses.
- 8+ years of experience in Python, Pandas, NumPy, and SciPy.
- 5+ years of experience in GCP.
- 5+ years of experience in BigQuery, PySpark, and Pub/Sub.
- 5+ years of experience working with and creating data architectures.
- Certified in Google Cloud Professional Data Engineer.
- Advanced proficiency in Google Cloud services such as Dataflow, Dataproc, Dataprep, Data Studio, and Cloud Composer.
- Proficient in writing complex Spark (PySpark) User Defined Functions (UDFs), Spark SQL, and HiveQL (see the illustrative UDF sketch after this list).
- Good understanding of Elasticsearch.
- Experience in assessing and ensuring data quality, data testing, and addressing data quality issues.
- Excellent understanding of Spark architecture and underlying frameworks including storage management.
- Solid background in database design and development, database administration, and software engineering across full life cycles.
- Experience with NoSQL data stores like MongoDB, DocumentDB, and DynamoDB.
- Knowledge of data governance principles and practices, including data lineage, metadata management, and access control mechanisms.
- Experience in implementing and optimizing data security controls, encryption, and compliance measures in GCP environments.
- Ability to troubleshoot complex issues, perform root cause analysis, and implement effective solutions in a timely manner.
- Proficiency in data visualization tools such as Tableau, Looker, or Data Studio to create insightful dashboards and reports for business users.
- Strong communication and interpersonal skills to effectively collaborate with technical and non-technical stakeholders, articulate complex concepts, and drive consensus.
- Experience with agile methodologies and project management tools like Jira or Asana for sprint planning, backlog grooming, and task tracking.
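As a small illustration of the PySpark UDF proficiency referenced above, here is a sketch showing one trivial transformation expressed both as a DataFrame UDF and as a function registered for Spark SQL; the data and function are invented, and built-in functions should be preferred over UDFs where possible.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-sketch").getOrCreate()
df = spark.createDataFrame([("alice",), ("BOB",)], ["name"])

# Python UDFs bypass Catalyst optimizations, so keep them a last resort.
@udf(returnType=StringType())
def title_case(s):
    return s.title() if s else s

df.withColumn("name_clean", title_case(col("name"))).show()

# The same logic through Spark SQL, after registering the function.
spark.udf.register("title_case_sql", lambda s: s.title() if s else s, StringType())
df.createOrReplaceTempView("people")
spark.sql("SELECT name, title_case_sql(name) AS name_clean FROM people").show()
```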


Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Knowledge of the Japanese language is an additional advantage for the candidate, but not mandatory.
Skills:
Experience with Cassandra, including installing, configuring, and monitoring a Cassandra cluster.
Experience with Cassandra data modeling and CQL scripting (see the illustrative sketch below). Experience with DataStax Enterprise Graph.
Experience with both Windows and Linux operating systems. Knowledge of the Microsoft .NET Framework (C#, .NET Core).
Ability to perform effectively in a team-oriented environment.
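As a small illustration of the Cassandra data modeling and CQL skills above, here is a sketch using the Python cassandra-driver; the contact point, keyspace, and table are hypothetical, and the partition/clustering key layout shows a typical time-series model.

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # hypothetical contact point
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# Partition by sensor, cluster by time descending: reads of "latest N
# readings for a sensor" stay within a single partition.
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.sensor_readings (
        sensor_id text,
        reading_time timestamp,
        value double,
        PRIMARY KEY (sensor_id, reading_time)
    ) WITH CLUSTERING ORDER BY (reading_time DESC)
""")

session.execute(
    "INSERT INTO demo.sensor_readings (sensor_id, reading_time, value) "
    "VALUES (%s, toTimestamp(now()), %s)",
    ("s-1", 21.5),
)
```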

Role Objective:
Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Roles & Responsibilities:
- Sound knowledge of Spark architecture, distributed computing, and Spark Streaming.
- Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning.
- SFDC (data modelling) experience would be given preference.
- Good understanding of object-oriented concepts and hands-on experience with Scala, with excellent programming logic and technique.
- Strong grasp of functional programming and OOP concepts in Scala.
- Good experience in SQL – should be able to write complex queries.
- Managing the team of Associates and Senior Associates and ensuring the utilization is maintained across the project.
- Able to mentor new members for onboarding to the project.
- Understand client requirements and be able to design, develop from scratch, and deliver.
- AWS cloud experience would be preferable.
- Design, build and operationalize large scale enterprise data solutions and applications using one or more of AWS data and analytics services - DynamoDB, RedShift, Kinesis, Lambda, S3, etc. (preferred)
- Hands on experience utilizing AWS Management Tools (CloudWatch, CloudTrail) to proactively monitor large and complex deployments (preferred)
- Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on AWS (preferred)
- Lead client calls to flag any delays, blockers, or escalations, and collate all the requirements.
- Managing project timing, client expectations and meeting deadlines.
- Should have played project and team management roles.
- Facilitate meetings within the team on regular basis.
- Understand business requirement and analyze different approaches and plan deliverables and milestones for the project.
- Optimization, maintenance, and support of pipelines.
- Strong analytical and logical skills.
- Ability to comfortably tackle new challenges and learn
As a Senior Analyst you will play a crucial role in improving customer experience, retention, and growth by identifying opportunities in the moments that matter and surfacing them to teams across functions. You have experience with collecting, stitching, and analyzing Voice of Customer and CX data, and are passionate about customer feedback. You will partner with the other Analysts on the team, as well as the Directors, in delivering impactful presentations that drive action across the organization.
You come with an understanding of customer feedback and experience analytics and the ability to tell a story with data. You are a go-getter who can take a request and run with it independently but also does not shy away from collaborating with the other team members. You are flexible and thrive in a fast-paced environment with changing priorities.
Responsibilities
- Stitch and analyze data from primary/secondary sources to determine key drivers of customer success, loyalty, risk, churn and overall experience.
- Verbalize and translate these insights into actionable tasks.
- Develop monthly and ad-hoc reporting.
- Maintain Qualtrics dashboards and surveys.
- Take on ad-hoc tasks related to XM platform onboarding, maintenance or launch of new tools
- Understand data sets within the Data Services Enterprise Data platform and other systems
- Translate user requirements independently
- Work with Business Insights, IT and technical teams
- Create PowerPoint decks and present insights to stakeholders
Qualifications
- Bachelor’s degree in data science/Analytics, Statistics or Business. Master’s degree is a plus.
- Extract data and customer insights, analyze audio or speech, and present them in any visualization tool.
- 5+ years of experience in an analytical, customer insights or related business function.
- Basic knowledge of SQL and Google BigQuery.
- 1+ year of experience working with Qualtrics, Medallia or another Experience Management platform
- Hands-on experience with statistical techniques: profiling, regression analysis, trend analysis, segmentation
- Well-organized and high energy
Non-technical requirements:
- You have experience working on client-facing roles.
- You are available to work from our Bangalore office from Day 1 in a night shift. (US Shift)
- You have strong communication skills.
- You have strong analytical skills.
The Sr. Analytics Engineer would provide technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization/tuning/resource allocation (see the illustrative sketch after this list).
- Excellent understanding of in-memory distributed computing frameworks like Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., Redshift, BigQuery, Cassandra).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob storage, AWS S3, Google Cloud storage, etc.
- Have a deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus
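As a small illustration of the Spark tuning skills referenced above, here is a sketch of a session configured with common knobs (shuffle partitions, adaptive query execution, Kryo serialization) plus a broadcast join that avoids a shuffle; the paths, table shapes, and join key are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder
    .appName("tuned-etl")
    .config("spark.sql.shuffle.partitions", "400")  # size shuffle width to the data
    .config("spark.sql.adaptive.enabled", "true")   # let AQE coalesce skewed partitions
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

orders = spark.read.parquet("s3a://bucket/orders/")  # hypothetical paths
dims = spark.read.parquet("s3a://bucket/dims/")

# Broadcasting the small dimension table turns a shuffle join into a map-side join.
joined = orders.join(broadcast(dims), "dim_id")
joined.write.mode("overwrite").parquet("s3a://bucket/joined/")
```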

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in forward-looking maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and a renowned AI prize (e.g., EU Horizon 2020), which together make TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing Industry with over two years of experience to join our team. As a data engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines (see the illustrative sketch at the end of this posting).
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud.
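As a small illustration of the Python ETL work this posting centers on, here is a minimal extract-transform-load sketch using pandas, with SQLite standing in for the real warehouse; the file name and column names are hypothetical.

```python
import sqlite3

import pandas as pd


def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["machine_id"])           # basic cleaning
    df["recorded_at"] = pd.to_datetime(df["recorded_at"])
    df["temp_c"] = (df["temp_f"] - 32) * 5 / 9      # unit normalization
    return df


def load(df: pd.DataFrame, conn: sqlite3.Connection) -> None:
    df.to_sql("sensor_readings", conn, if_exists="append", index=False)


if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:  # stand-in for the warehouse
        load(transform(extract("readings.csv")), conn)
```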

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in forward-looking maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and a renowned AI prize (e.g., EU Horizon 2020), which together make TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated senior Data Engineer from the manufacturing Industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building & enhancing a data lake-house & subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud.
- Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
Benefits And Perks
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
Must have skills
3 to 6 years
Data Science
SQL, Excel, BigQuery - mandatory, 3+ years
Python/ML, Hadoop, Spark - 2+ years
Requirements
• 3+ years prior experience as a data analyst
• Detail-oriented, with structured thinking and an analytical mindset.
• Proven analytic skills, including data analysis and data validation.
• Technical writing experience in relevant areas, including queries, reports, and presentations.
• Strong SQL and Excel skills with the ability to learn other analytic tools
• Good communication skills (being precise and clear)
• Good to have prior knowledge of python and ML algorithms


Python Data Engineer
Job Description:
• Design, develop, and maintain database scripts and procedures to support application requirements.
• Collaborate with software developers to integrate database scripts with application code.
• Troubleshoot and resolve database issues in a timely manner.
• Perform database maintenance tasks, such as backups, restores, and migrations.
• Implement data security measures to protect sensitive information.
• Develop and maintain documentation for database scripts and procedures.
• Stay up-to-date with emerging technologies and best practices in database management.
Job Requirements:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 3+ years of proven experience as a Database Engineer or similar role with Python.
• Proficiency in SQL and scripting languages such as Python or JavaScript.
• Strong understanding of database management systems, including relational databases (e.g., MySQL, PostgreSQL, SQL Server) and NoSQL databases (e.g., MongoDB, Cassandra).
• Experience with database design principles and data modelling techniques.
• Knowledge of database optimisation techniques and performance tuning.
• Familiarity with version control systems (e.g., Git) and continuous integration tools.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration skills.
Job Description:
We are seeking a talented Machine Learning Engineer with expertise in software engineering to join our team. As a Machine Learning Engineer, your primary responsibility will be to develop machine learning (ML) solutions that focus on technology process improvements. Specifically, you will be working on projects involving ML and generative AI solutions for technology and data management efficiencies, such as optimal cloud computing, knowledge bots, software code assistants, and automatic data management.
Responsibilities:
- Collaborate with cross-functional teams to identify opportunities for technology process improvements that can be solved using machine learning and generative AI.
- Define and build innovative ML and generative AI systems, such as AI assistants for varied SDLC tasks, and improve data and infrastructure management.
- Design and develop ML engineering solutions, generative AI applications, and fine-tuning of Large Language Models (LLMs) for the above, ensuring scalability, efficiency, and maintainability of such solutions.
- Implement prompt engineering techniques to fine-tune and enhance LLMs for better performance and application-specific needs.
- Stay abreast of the latest advancements in the field of Generative AI and actively contribute to the research and development of new ML & Generative AI Solutions.
Requirements:
- A Master's or Ph.D. degree in Computer Science, Statistics, Data Science, or a related field.
- Proven experience working as a Software Engineer, with a focus on ML Engineering and exposure to Generative AI Applications such as chatGPT.
- Strong proficiency in programming languages such as Java, Scala, and Python, and in platforms such as Google Cloud, BigQuery, Hadoop, and Spark.
- Solid knowledge of software engineering best practices, including version control systems (e.g., Git), code reviews, and testing methodologies.
- Familiarity with large language models (LLMs), prompt engineering techniques, vector DBs, embeddings, and various fine-tuning techniques.
- Strong communication skills to effectively collaborate and present findings to both technical and non-technical stakeholders.
- Proven ability to adapt and learn new technologies and frameworks quickly.
- A proactive mindset with a passion for continuous learning and research in the field of Generative AI.
If you are a skilled and innovative Data Scientist with a passion for Generative AI, and have a desire to contribute to technology process improvements, we would love to hear from you. Join our team and help shape the future of our AI Driven Technology Solutions.
Radisys Corporation, a global leader in open telecom solutions, enables service providers to drive disruption with new open architecture business models. Our innovative technology solutions leverage open reference architectures and standards, combined with open software and hardware, to power business transformation for the telecom industry. Our services organization delivers systems integration expertise necessary to solve complex deployment challenges for communications and content providers.
Job Overview :
We are looking for a Lead Engineer - Java with a strong background in Java development and hands-on experience with J2EE, Spring Boot, Kubernetes, Microservices, NoSQL, and SQL. As a Lead Engineer, you will be responsible for designing and developing high-quality software solutions and ensuring the successful delivery of projects. This is a full-time role requiring 7 to 10 years of experience, based in Bangalore, Karnataka, India, with excellent growth opportunities.
Qualifications and Skills :
- Bachelor's or master's degree in Computer Science or a related field
- Strong knowledge of Core Java, J2EE, and Spring Boot frameworks
- Hands-on experience with Kubernetes and microservices architecture
- Experience with NoSQL and SQL databases
- Proficient in troubleshooting and debugging complex system issues
- Experience in Enterprise Applications
- Excellent communication and leadership skills
- Ability to work in a fast-paced and collaborative environment
- Strong problem-solving and analytical skills
Roles and Responsibilities :
- Work closely with product management and cross-functional teams to define requirements and deliverables
- Design scalable and high-performance applications using Java, J2EE, and Spring Boot
- Develop and maintain microservices using Kubernetes and containerization
- Design and implement data models using NoSQL and SQL databases
- Ensure the quality and performance of software through code reviews and testing
- Collaborate with stakeholders to identify and resolve technical issues
- Stay up-to-date with the latest industry trends and technologies
Responsibilities -
- Collaborate with the development team to understand data requirements and identify potential scalability issues.
- Design, develop, and implement scalable data pipelines and ETL processes to ingest, process, and analyze large volumes of data from various sources.
- Optimize data models and database schemas to improve query performance and reduce latency.
- Monitor and troubleshoot the performance of our Cassandra database on Azure Cosmos DB, identifying bottlenecks and implementing optimizations as needed (a connection sketch follows this list).
- Work with cross-functional teams to ensure data quality, integrity, and security.
- Stay up to date with emerging technologies and best practices in data engineering and distributed systems.
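As a rough sketch of the Cosmos DB Cassandra connectivity mentioned above, assuming the Python cassandra-driver package (host, credentials, keyspace, and table are placeholders):

```python
# A minimal connection sketch for the Cassandra API of Azure Cosmos DB.
import ssl
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

ssl_context = ssl.create_default_context()  # Cosmos DB requires TLS
auth = PlainTextAuthProvider(username="<account>", password="<key>")

cluster = Cluster(
    ["<account>.cassandra.cosmos.azure.com"],
    port=10350,
    auth_provider=auth,
    ssl_context=ssl_context,
)
session = cluster.connect("<keyspace>")

# Enable tracing on a query to help spot bottlenecks
rows = session.execute("SELECT * FROM events LIMIT 10", trace=True)
for row in rows:
    print(row)
```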
Qualifications & Requirements -
- Proven experience as a Data Engineer or similar role, with a focus on designing and optimizing large-scale data systems.
- Strong proficiency in working with NoSQL databases, particularly Cassandra.
- Experience with cloud-based data platforms, preferably Azure Cosmos DB.
- Solid understanding of distributed systems, data modelling, data warehouse design, and ETL processes.
- Detailed understanding of Software Development Life Cycle (SDLC) is required.
- Good to have: knowledge of a visualization tool such as Power BI or Tableau.
- Good to have: knowledge of the SAP landscape (SAP ECC, SLT, BW, HANA, etc.).
- Good to have: experience on a data migration project.
- Knowledge of Supply Chain domain would be a plus.
- Familiarity with software architecture (data structures, data schemas, etc.)
- Familiarity with Python programming language is a plus.
- The ability to work in a dynamic, fast-paced work environment.
- A passion for data and information with strong analytical, problem solving, and organizational skills.
- Self-motivated with the ability to work under minimal direction.
- Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
Role & Responsibilities
- Create innovative architectures based on business requirements.
- Design and develop cloud-based solutions for global enterprises.
- Coach and nurture engineering teams through feedback, design reviews, and best practice input.
- Lead cross-team projects, ensuring resolution of technical blockers.
- Collaborate with internal engineering teams, global technology firms, and the open-source community.
- Lead initiatives to learn and apply modern and advanced technologies.
- Oversee the launch of innovative products in high-volume production environments.
- Develop and maintain high-quality software applications using JS frameworks (React, NPM, Node.js, etc.).
- Utilize design patterns for backend technologies and ensure strong coding skills.
- Deploy and manage applications on AWS cloud services, including ECS (Fargate), Lambda, and load balancers. Work with Docker to containerize services.
- Implement and follow CI/CD practices using GitLab for automated build, test, and deployment processes.
- Collaborate with cross-functional teams to design technical solutions, ensuring adherence to Microservice Design patterns and Architecture.
- Apply expertise in Authentication & Authorization protocols (e.g., JWT, OAuth), including certificate handling, to ensure robust application security.
- Utilize databases such as Postgres, MySQL, Mongo and DynamoDB for efficient data storage and retrieval.
- Demonstrate familiarity with Big Data technologies (a streaming sketch follows this list), including but not limited to:
- Apache Kafka for distributed event streaming.
- Apache Spark for large-scale data processing.
- Containers for scalable and portable deployments.
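As a brief sketch of the Kafka-plus-Spark combination above, assuming PySpark with the spark-sql-kafka connector package available on the classpath (broker address and topic are placeholders):

```python
# A minimal Spark Structured Streaming job reading from Kafka.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka values arrive as bytes; cast to string before downstream processing
events = stream.selectExpr("CAST(value AS STRING) AS json_value")

query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```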
Technical Skills:
- 7+ years of hands-on development experience with JS frameworks, specifically MERN.
- Strong coding skills in backend technologies using various design patterns.
- Strong UI development skills using React.
- Expert in containerization using Docker.
- Knowledge of cloud platforms, specifically OCI, and familiarity with serverless technology, services like ECS, Lambda, and load balancers.
- Proficiency in CI/CD practices using GitLab or Bamboo.
- Strong knowledge of Microservice Design patterns and Architecture.
- Expertise in authentication and authorization protocols such as JWT and OAuth, including certificate handling.
- Experience working with high-throughput streaming media data.
- Experience working with databases such as Postgres, MySQL, and DynamoDB.
- Familiarity with Big Data technologies related to Kafka, PySpark, Apache Spark, Containers, etc.
- Experience with container Orchestration tools like Kubernetes.
• Bachelor's Degree in Information Technology or a related field desirable.
• 5+ years of database administrator experience in Microsoft technologies.
• Experience with Azure SQL in a multi-region configuration.
• Azure certifications (good to have).
• 2+ years' experience performing data migrations, upgrades/modernizations, and performance tuning on IaaS and PaaS (Managed Instance and Azure SQL).
• Experience with routine maintenance, recovery, and handling failover of databases.
• Knowledge of RDBMS platforms, e.g., Microsoft SQL Server, and the Azure cloud platform.
• Expertise in Microsoft SQL Server on VMs, Azure SQL Managed Instance, and Azure SQL.
• Experience in setting up and working with an Azure data warehouse (a connectivity sketch follows).
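For illustration, a minimal Azure SQL health-check sketch in Python, assuming pyodbc and ODBC Driver 18 for SQL Server (server, database, and credentials are placeholders):

```python
# A minimal Azure SQL connectivity and resource-usage check.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>.database.windows.net;"
    "DATABASE=<database>;"
    "UID=<user>;PWD=<password>;Encrypt=yes;"
)

with pyodbc.connect(conn_str, timeout=10) as conn:
    cursor = conn.cursor()
    # sys.dm_db_resource_stats is a standard Azure SQL DMV for recent resource usage
    cursor.execute(
        "SELECT TOP 5 * FROM sys.dm_db_resource_stats ORDER BY end_time DESC"
    )
    for row in cursor.fetchall():
        print(row)
```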
Job Title: Big Data Developer
Job Description
Bachelor's degree in Engineering or Computer Science or equivalent OR Master's in Computer Applications or equivalent.
Solid software development experience, including leading teams of engineers and scrum teams.
4+ years of hands-on experience working with MapReduce, Hive, and Spark (core, SQL, and PySpark).
Solid grasp of data warehousing concepts.
Knowledge of Financial reporting ecosystem will be a plus.
4+ years of experience within Data Engineering / Data Warehousing using Big Data technologies will be an added advantage.
Expertise in distributed ecosystems.
Hands-on experience with programming using Core Java or Python/Scala
Expertise in Hadoop and Spark architecture and their working principles.
Hands-on experience writing and understanding complex SQL (Hive/PySpark DataFrames) and optimizing joins while processing huge amounts of data (a broadcast-join sketch follows this list).
Experience in UNIX shell scripting.
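As a sketch of the join optimization mentioned above, assuming PySpark and two hypothetical Parquet tables (paths and column names are placeholders):

```python
# A minimal broadcast-join sketch: broadcasting the small dimension table
# avoids shuffling the large fact table across the cluster.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-optimization").getOrCreate()

orders = spark.read.parquet("/data/orders")    # hypothetical large fact table
regions = spark.read.parquet("/data/regions")  # hypothetical small lookup table

result = orders.join(broadcast(regions), on="region_id")
result.write.mode("overwrite").parquet("/data/orders_enriched")
```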
Roles & Responsibilities
Ability to design and develop optimized data pipelines for batch and real-time data processing.
Should have experience in analysis, design, development, testing, and implementation of system applications
Demonstrated ability to develop and document technical and functional specifications and analyze software and system processing flows.
Excellent technical and analytical aptitude
Good communication skills.
Excellent project management skills.
Results-driven approach.
Mandatory Skills: Big Data, PySpark, Hive
Minimum of 8 years of experience, of which 4 years should be applied data mining experience in disciplines such as call centre metrics.
Strong experience in advanced statistics and analytics, including segmentation, modelling, regression, and forecasting.
Experience with leading and managing large teams.
Demonstrated pattern of success in using advanced quantitative analytic methods to solve business problems.
Demonstrated experience with Business Intelligence / Data Mining tools to work with data, investigate anomalies, construct data sets, and build models.
It is critical to share details of projects undertaken (preferably in the telecom industry), specifically analysis drawn from CRM data. A brief segmentation sketch follows.
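For illustration, a minimal segmentation sketch with scikit-learn, assuming call-centre metrics in a pandas DataFrame (column names and values are invented for this example):

```python
# A minimal customer-segmentation sketch using k-means clustering.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

calls = pd.DataFrame({
    "avg_handle_time": [120, 340, 95, 410, 150],
    "calls_per_month": [3, 12, 2, 15, 5],
})

scaled = StandardScaler().fit_transform(calls)  # scale features before clustering
calls["segment"] = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(scaled)
print(calls)
```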
Role: Oracle DBA Developer
Location: Hyderabad
Required Experience: 8 + Years
Skills: DBA, Terraform, Ansible, Python, Shell Script, DevOps activities, Oracle DBA, SQL Server, Cassandra, Oracle SQL/PLSQL, MySQL/Oracle/MSSQL/Mongo/Cassandra, security measure configuration
Roles and Responsibilities:
1. 8+ years of hands-on DBA experience in one or many of the following: SQL Server, Oracle, Cassandra
2. DBA experience in an SRE environment will be an advantage.
3. Experience in automation / building databases by providing self-service tools; analyze and implement solutions for database administration (e.g., backups, performance tuning, troubleshooting, capacity planning).
4. Analyze solutions and implement best practices for cloud databases and their components.
5. Build and enhance tooling, automation, and CI/CD workflows (Jenkins, etc.) that provide safe self-service capabilities to the teams.
6. Implement proactive monitoring and alerting to detect issues before they impact users. Use a metrics-driven approach to identify and root-cause performance and scalability bottlenecks in the system.
7. Work on automation of database infrastructure and help engineering succeed by providing self-service tools.
8. Write database documentation, including data standards, procedures, and definitions for the data dictionary (metadata)
9. Monitor database performance, control access permissions and privileges, capacity planning, implement changes and apply new patches and versions when required.
10. Recommend query and schema changes to optimize the performance of database queries.
11. Have experience with cloud-based environments (OCI, AWS, Azure) as well as On-Premises.
12. Have experience with cloud databases such as SQL Server, Oracle, and Cassandra.
13. Have experience with infrastructure automation and configuration management (Jira, Confluence, Ansible, Gitlab, Terraform)
14. Have excellent written and verbal English communication skills.
15. Planning, managing, and scaling of data stores to ensure a business’ complex data requirements are met and it can easily access its data in a fast, reliable, and safe manner.
16. Ensures the quality of orchestration and integration of tools needed to support daily operations by patching together existing infrastructure with cloud solutions and additional data infrastructures.
17. Ensure data security by rigorously testing backup and recovery processes and frequently auditing well-regulated security procedures.
18. Use software and tooling to automate manual tasks and enable engineers to move fast without the concern of losing data during their experiments.
19. Define service level objectives (SLOs) and perform risk analysis to determine which problems to address and which to automate.
20. Bachelor's Degree in a technical discipline required.
21. DBA Certifications required: Oracle, SQLServer, Cassandra (2 or more)
22. Cloud and DevOps certifications will be an advantage.
Must have Skills:
- Oracle DBA with development (a health-check sketch follows)
- SQL
- DevOps tools
- Cassandra
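For illustration, a minimal Oracle health-check sketch in Python, assuming the python-oracledb driver (connection details are placeholders):

```python
# A minimal DBA automation sketch: count active sessions as a simple health probe.
import oracledb

conn = oracledb.connect(
    user="<user>", password="<password>", dsn="<host>:1521/<service>"
)
with conn.cursor() as cursor:
    # v$session is a standard Oracle dynamic performance view
    cursor.execute("SELECT COUNT(*) FROM v$session")
    (active_sessions,) = cursor.fetchone()
    print(f"Active sessions: {active_sessions}")
conn.close()
```

A script like this is the kind of building block that the self-service tooling and monitoring automation described above (items 3, 6, and 7) are assembled from.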
About Us:
6sense is a Predictive Intelligence Engine that is reimagining how B2B companies do sales and marketing. It works with big data at scale, advanced machine learning, and predictive modelling to find buyers and predict what they will purchase, when, and how much.
6sense helps B2B marketing and sales organizations fully understand the complex ABM buyer journey. By combining intent signals from every channel with the industry's most advanced AI predictive capabilities, it is finally possible to predict account demand and optimize demand generation in an ABM world. Equipped with the power of AI and the 6sense Demand Platform™, marketing and sales professionals can uncover, prioritize, and engage buyers to drive more revenue.
6sense is seeking a Staff Software Engineer, Data, to become part of a team designing, developing, and deploying its customer-centric applications.
We've more than doubled our revenue in the past five years and completed our Series E funding of $200M last year, giving us a stable foundation for growth.
Responsibilities:
1. Own critical datasets and data pipelines for product & business, and work towards direct business goals of increased data coverage, data match rates, data quality, and data freshness
2. Create more value from various datasets with creative solutions, unlocking more value from existing data, and help build a data moat for the company
3. Design, develop, test, deploy, and maintain optimal data pipelines, and assemble large, complex data sets that meet functional and non-functional business requirements
4. Improve our current data pipelines, i.e., improve their performance and SLAs, remove redundancies, and figure out a way to test before vs. after rollout
5. Identify, design, and implement process improvements in data flow across multiple stages and via collaboration with multiple cross-functional teams, e.g., automating manual processes, optimising data delivery, hand-off processes, etc.
6. Work with cross-functional stakeholders, including the Product, Data Analytics, and Customer Support teams, to enable their data access and related goals
7. Build for security, privacy, scalability, reliability, and compliance
8. Mentor and coach other team members on scalable and extensible solution design and best coding standards
9. Help build a team and cultivate innovation by driving cross-collaboration and execution of projects across multiple teams
Requirements:
8-10+ years of overall work experience as a Data Engineer
Excellent analytical and problem-solving skills
Strong experience with Big Data technologies like Apache Spark; experience with Hadoop, Hive, and Presto would be a plus
Strong experience in writing complex, optimized SQL queries across large data sets, and experience with optimizing queries and the underlying storage
Experience with Python/Scala
Experience with Apache Airflow or other orchestration tools (a DAG sketch follows this list)
Experience with writing Hive/Presto UDFs in Java
Experience working on the AWS cloud platform and services
Experience with key-value stores or NoSQL databases would be a plus
Comfortable with the Unix/Linux command line
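As a rough sketch of the Airflow orchestration mentioned above, assuming Airflow 2.4+ (the DAG id, schedule, and task logic are placeholders):

```python
# A minimal two-task Airflow DAG: extract, then load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source")

def load():
    print("write data to warehouse")

with DAG(
    dag_id="example_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2  # extract runs before load
```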
Interpersonal Skills:
You can work independently as well as part of a team.
You take ownership of projects and drive them to conclusion.
You're a good communicator and are capable of not just doing the work, but also teaching others and explaining the "why" behind complicated technical decisions.
You aren't afraid to roll up your sleeves: this role will evolve over time, and we'll want you to evolve with it.
Lead Data Engineer
Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.
Job responsibilities
· You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems
· You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges
· You will collaborate with Data Scientists in order to design scalable implementations of their models
· You will pair to write clean and iterative code based on TDD (a test-first sketch follows this list)
· Leverage various continuous delivery practices to deploy, support and operate data pipelines
· Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
· Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
· Create data models and speak to the tradeoffs of different modeling approaches
· On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product
· Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
· Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
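For illustration, a minimal test-first sketch in Python with pytest and pandas; clean_orders() is a hypothetical transform invented for this example:

```python
# A minimal TDD-style sketch: the test pins down the behaviour of a transform.
import pandas as pd

def clean_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Lowercase column names and drop rows missing an order_id."""
    out = df.copy()
    out.columns = [c.lower() for c in out.columns]
    return out.dropna(subset=["order_id"])

def test_clean_orders_drops_missing_ids():
    raw = pd.DataFrame({"ORDER_ID": [1, None], "AMOUNT": [10.0, 5.0]})
    cleaned = clean_orders(raw)
    assert list(cleaned.columns) == ["order_id", "amount"]
    assert len(cleaned) == 1
```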
Job qualifications
Technical skills
· You are equally happy coding and leading a team to implement a solution
· You have a track record of innovation and expertise in Data Engineering
· You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations
· You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
· You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
· Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
· You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
· You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
· Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems
Professional skills
· Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers
· You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
· An interest in coaching others, sharing your experience and knowledge with teammates
· You enjoy influencing others and always advocate for technical excellence while being open to change when needed
● Work in a Business Process Outsourcing (BPO) model, providing marketing solutions to different local and international clients and Business Development Units.
● Content Production: Creating content to support demand generation initiatives and grow brand awareness in the competitive online casino category.
● Writing content for different marketing channels (such as website, blogs, thought leadership pieces, social media, podcasts, webinar, etc.) as assigned to effectively reach the desired target players and marketing goals.
● Data Analysis: Analyze player data to identify trends, improve the player experience, and make data-driven decisions for the casino's operations.
● Research and use AI-based tools to improve and speed up content creation processes.
● Researching content and consumer trends to ensure that content is relevant and appealing.
● Help develop and participate in market research for the purposes of thought leadership content production and opportunities, and competitive intelligence for content marketing.
● Security: Maintain a secure online environment, including protecting player data and preventing cyberattacks.
● Managing content calendars (and supporting calendar management) and ensuring the content you write is consistent with brand standards and meets the brief as assigned.
● Coordinating with project manager / content manager to ensure the timely delivery of assignments.
● Keeping up to date with content trends, consumer preferences, and advancements in technology.
● Reporting: Generate regular reports on key performance indicators, financial metrics, and operational data to assess the casino's performance.
The specific responsibilities and requirements for a Marketing Content Supervisor/Manager in an online casino may vary depending on the size and nature of the casino, as well as local regulations and industry standards.
Salary
PHP 80,000 - PHP 100,000 (approx. INR 117,587 - 146,960)
Work Experience Requirements
Essential Qualifications
● Excellent research, writing, editing, proofreading, content creation and communication skills.
● Proficiency/experience in formulating corporate/brand/product messaging.
● Strong understanding of SEO and content practices.
● Proficiency in MS Office, Zoom, Slack, marketing platforms related to creative content creation/ project management/ workflow.
● Content writing / copywriting portfolio demonstrating scope of content/copy writing capabilities and application of writing and SEO best practices.
● Highly motivated, self-starter, able to prioritize projects, accept responsibility and follow through without close supervision on every step.
● Demonstrated strong analytical skills with an action-oriented mindset focused on data-driven results.
● Experience in AI-based content creation tools is a plus. Openness to research and use AI tools required.
● Passion for learning and self-improvement.
● Detail-oriented team player with a positive attitude.
● Ability to embrace change and love working in dynamic, growing environments.
● Experience with research, content production, writing on-brand and turning thought pieces into multiple content assets by simplifying complex concepts preferred.
● Ability to keep abreast of content trends and advancements in content strategies and technologies.
● On-camera or on-mic experience or desire to speak and present preferred.
● Must be willing to report onsite in Cambodia
Looking for freelance work?
We are seeking a freelance Data Engineer with 7+ years of experience.
Skills Required: Deep knowledge of any cloud (AWS, Azure, Google Cloud), Databricks, data lakes, data warehousing, Python/Scala, SQL, BI, and other analytics systems.
What we are looking for
We are seeking an experienced Senior Data Engineer with experience in the architecture, design, and development of highly scalable data integration and data engineering processes.
- The Senior Consultant must have a strong understanding and experience with data & analytics solution architecture, including data warehousing, data lakes, ETL/ELT workload patterns, and related BI & analytics systems
- Strong in scripting languages like Python, Scala
- 5+ years of hands-on experience with one or more of these data integration/ETL tools.
- Experience building on-prem data warehousing solutions.
- Experience with designing and developing ETLs, Data Marts, Star Schema
- Designing a data warehouse solution using Synapse or Azure SQL DB
- Experience building pipelines using Synapse or Azure Data Factory to ingest data from various sources
- Understanding of integration run times available in Azure.
- Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.


Experience:
Should have a minimum of 10-12 years of experience.
Should have product development/maintenance/production support experience in a support organization.
Should have a good understanding of the services business for Fortune 1000 companies from an operations point of view.
Ability to read, understand, and communicate complex technical information.
Ability to express ideas in an organized, articulate, and concise manner.
Ability to face stressful situations with a positive attitude.
Any certification in regard to support services will be an added advantage.
Education: BE, B.Tech (CS), MCA
Location: India
Primary Skills:
Hands-on experience with the OpenStack framework and the ability to set up a private cloud using an OpenStack environment. Awareness of various OpenStack services and modules (a short openstacksdk sketch follows this list).
Strong experience with OpenStack services like Neutron, Cinder, Keystone, etc.
Proficiency in programming languages such as Python, Ruby, or Go.
Strong knowledge of Linux systems administration and networking.
Familiarity with virtualization technologies like KVM or VMware.
Experience with configuration management and IaC tools like Ansible, Terraform.
Subject matter expertise in OpenStack security
Solid experience with Linux and shell scripting
Sound knowledge of cloud computing concepts and technologies, such as Docker, Kubernetes, AWS, GCP, Azure, etc.
Ability to configure an OpenStack environment for optimal resource usage
Good knowledge of security and operations in an OpenStack environment
Strong knowledge of Linux internals, networking, storage, security
Strong knowledge of VMware Enterprise products (ESX, vCenter)
Hands-on experience with Heat orchestration
Experience with CI/CD, monitoring, operational aspects
Strong experience working with REST APIs and JSON
Exposure to Big Data technologies (messaging queues, Hadoop/MPP, NoSQL databases)
Hands-on experience with open-source monitoring tools such as Grafana, Prometheus, Nagios, Ganglia, Zabbix, etc.
Strong verbal and written communication skills are mandatory
Excellent analytical and problem solving skills are mandatory
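As a short sketch of the OpenStack work described here, assuming the openstacksdk Python package and a clouds.yaml entry named "mycloud" (the cloud name is a placeholder):

```python
# A minimal openstacksdk sketch: quick environment checks via the API.
import openstack

conn = openstack.connect(cloud="mycloud")

# List compute instances (Nova) and their status
for server in conn.compute.servers():
    print(server.name, server.status)

# List networks (Neutron) visible to the project
for network in conn.network.networks():
    print(network.name)
```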
Role & Responsibilities
Advise customers and colleagues on cloud and virtualization topics
Work with the architecture team on cloud design projects using OpenStack
Collaborate with product, customer success, and presales on customer projects
Participate in onsite assessments and workshops when requested
Provide subject matter expertise and mentor colleagues
Set up OpenStack environments for projects
Design, deploy, and maintain OpenStack infrastructure.
Collaborate with cross-functional chapters to integrate OpenStack with other services (k8s, DBaaS)
Develop automation scripts and tools to streamline OpenStack operations.
Troubleshoot and resolve issues related to OpenStack services.
Monitor and optimize the performance and scalability of OpenStack components.
Stay updated with the latest OpenStack releases and contribute to the OpenStack community.
Work closely with Architects and Product Management to understand requirements
Should be capable of working independently and responsible for end-to-end implementation
Should work with complete ownership and handle all issues without missing SLAs
Work closely with the engineering team and support team
Should be able to debug issues and report them appropriately in the ticketing system
Contribute to improving the efficiency of the assignment through quality improvements and innovative suggestions
Should be able to debug and create scripts for automation
Should be able to configure monitoring utilities and set up alerts
Should be hands-on in setting up OS, applications, and databases, and have a passion for learning new technologies
Should be able to scan logs, errors, and exceptions and get to the root cause of an issue
Contribute to developing a knowledge base in collaboration with other team members
Maintain customer loyalty through integrity and accountability
Groom and mentor team members on project technologies and work


Required: a full-stack Senior SDE with a focus on backend microservices / modular monoliths, with 3-4+ years of experience in the following:
- Bachelor’s or Master’s degree in Computer Science or equivalent industry technical skills
- Mandatory: in-depth knowledge of and strong experience in the Python programming language.
- Expertise and significant work experience in Python with FastAPI and async frameworks (a minimal sketch follows this list).
- Prior experience building microservices and/or modular monoliths.
- Should be an expert in Object-Oriented Programming and Design Patterns.
- Has knowledge of and experience with SQLAlchemy/ORM, Celery, Flower, etc.
- Has knowledge of and experience with Kafka/RabbitMQ and Redis.
- Experience in Postgres/CockroachDB.
- Experience in MongoDB/DynamoDB and/or Cassandra is an added advantage.
- Strong experience in AWS services (e.g., EC2, ECS, Lambda, Step Functions, S3, SQS, Cognito) and/or equivalent Azure services preferred.
- Experience working with Docker required.
- Experience in socket.io is an added advantage.
- Experience with CI/CD, e.g., GitHub Actions, preferred.
- Experience with version control tools such as Git.
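For illustration, a minimal FastAPI sketch (the endpoints and model are hypothetical, invented for this example):

```python
# A minimal async microservice sketch with FastAPI and pydantic.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.get("/health")
async def health() -> dict:
    return {"status": "ok"}

@app.post("/items")
async def create_item(item: Item) -> dict:
    # A real service would persist this, e.g., via SQLAlchemy to Postgres
    return {"created": item.name}
```

Run locally with, for example, uvicorn main:app --reload.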
This is one of the early positions for scaling up the Technology team. So culture-fit is really important.
- The role will require serious commitment, and someone with a mindset similar to the team's would be a good fit. It's going to be a tremendous growth opportunity. There will be challenging tasks. A lot of these tasks would involve working closely with our AI & Data Science Team.
- We are looking for someone with considerable expertise and experience in a low-latency, highly scaled backend / full-stack engineering stack. The role is ideal for someone willing to take on such challenges.
- Coding Expectation – 70-80% of time.
- Has worked with an enterprise solution company/client, or with a growth-stage or scaled startup.
- Skills to work effectively in a distributed and remote team environment.
About Quadratyx:
We are a global, product-centric insight & automation services company. We help the world's organizations make better and faster decisions using the power of insight and intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing, and Advanced Analytics. Quadratyx can boast more extensive experience in data sciences and analytics than most other companies in India.
We firmly believe in Excellence Everywhere.
Job Description
Purpose of the Job/ Role:
• As a Technical Lead, your work is a combination of hands-on contribution, customer engagement, and technical team management. Overall, you'll design, architect, deploy, and maintain big data solutions.
Key Requisites:
• Expertise in Data structures and algorithms.
• Technical management across the full life cycle of big data (Hadoop) projects from requirement gathering and analysis to platform selection, design of the architecture and deployment.
• Scaling of cloud-based infrastructure.
• Collaborating with business consultants, data scientists, engineers and developers to develop data solutions.
• Leading and mentoring a team of data engineers.
• Hands-on experience in test-driven development (TDD).
• Expertise in NoSQL databases like Mongo and Cassandra (Mongo preferred), and strong knowledge of relational databases.
• Good knowledge of Kafka and Spark Streaming internal architecture (a consumer sketch follows this list).
• Good knowledge of any application servers.
• Extensive knowledge of big data platforms like Hadoop, Hortonworks, etc.
• Knowledge of data ingestion and integration on cloud services such as AWS, Google Cloud, Azure, etc.
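For illustration, a minimal Kafka consumer sketch using the kafka-python package (broker, topic, and group id are placeholders):

```python
# A minimal Kafka consumer loop.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",
    bootstrap_servers=["broker:9092"],
    group_id="analytics",
    auto_offset_reset="earliest",
)

for message in consumer:
    print(message.partition, message.offset, message.value)
```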
Skills/ Competencies Required
Technical Skills
• Strong expertise (9 or more out of 10) in at least one modern programming language, like Python, or Java.
• Clear end-to-end experience in designing, programming, and implementing large software systems.
• Passion and the analytical ability to solve complex problems.
Soft Skills
• Always speaking your mind freely.
• Communicating ideas clearly in talking and writing, integrity to never copy or plagiarize intellectual property of others.
• Exercising discretion and independent judgment where needed in performing duties; not needing micro-management, maintaining high professional standards.
Academic Qualifications & Experience Required
Required Educational Qualification & Relevant Experience
• Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute.
• Minimum 7 - 10 years of work experience as a developer in an IT organization (preferably with an Analytics / Big Data / Data Science / AI background).

Title: Platform Engineer
Location: Chennai
Work Mode: Hybrid (Remote and Chennai Office)
Experience: 4+ years
Budget: 16 - 18 LPA
Responsibilities:
- Parse data using Python, create dashboards in Tableau.
- Utilize Jenkins for Airflow pipeline creation and CI/CD maintenance.
- Migrate DataStage jobs to Snowflake and optimize performance (a connectivity sketch follows this list).
- Work with HDFS, Hive, Kafka, and basic Spark.
- Develop Python scripts for data parsing, quality checks, and visualization.
- Conduct unit testing and web application testing.
- Implement Apache Airflow and handle production migration.
- Apply data warehousing techniques for data cleansing and dimension modeling.
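As a brief sketch of the Snowflake connectivity involved, assuming the snowflake-connector-python package (account and credentials are placeholders):

```python
# A minimal Snowflake connection and smoke-test query.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
)
cur = conn.cursor()
cur.execute("SELECT CURRENT_VERSION()")
print(cur.fetchone())
cur.close()
conn.close()
```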
Requirements:
- 4+ years of experience as a Platform Engineer.
- Strong Python skills, knowledge of Tableau.
- Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
- Proficient in Unix Shell Scripting and SQL.
- Familiarity with ETL tools like DataStage and DMExpress.
- Understanding of Apache Airflow.
- Strong problem-solving and communication skills.
Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.
Qualifications & Experience:
▪ 2 - 4 years of overall experience in ETLs, data pipelines, data warehouse development, and database design
▪ Software solution development using Hadoop Technologies such as MapReduce, Hive, Spark, Kafka, Yarn/Mesos etc.
▪ Expert in SQL, having worked on advanced SQL for at least 2+ years
▪ Good development skills in Java, Python or other languages
▪ Experience with EMR, S3
▪ Knowledge of and exposure to BI applications, e.g., Tableau, QlikView
▪ Comfortable working in an agile environment
- KSQL
- Data Engineering spectrum (Java/Spark)
- Spark Scala / Kafka Streaming
- Confluent Kafka components
- Basic understanding of Hadoop

Role: Principal Software Engineer
We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. This role will require managing, processing, and analyzing large amounts of raw information in scalable databases. It will also involve developing unique data structures and writing algorithms for an entirely new set of products. The candidate must have critical thinking and problem-solving skills, be experienced in software development with advanced algorithms, and be able to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus. The candidate should have some exposure to cloud environments, continuous integration, and agile scrum processes.
Responsibilities:
• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule
• Software development that creates data-driven intelligence in products with Big Data backends
• Exploratory analysis of the data to come up with efficient data structures and algorithms for given requirements
• The system may or may not involve machine learning models and pipelines but will require advanced algorithm development
• Managing data in large-scale data stores (such as NoSQL DBs, time series DBs, geospatial DBs, etc.)
• Creating metrics and evaluating algorithms for better accuracy and recall
• Ensuring efficient access and usage of data through the means of indexing, clustering etc.
• Collaborate with engineering and product development teams.
Requirements:
• Master's or Bachelor's degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or a related field - from a top-tier school, OR a Master's degree or higher in Statistics or Mathematics with a hands-on background in software development
• 8 to 10 years of product development experience, including algorithmic work
• 5+ years of experience working with large data sets or doing large-scale quantitative analysis
• Understanding of SaaS based products and services.
• Strong algorithmic problem-solving skills
• Able to mentor and manage a team and take responsibility for team deadlines.
Skill set required:
• In-depth knowledge of the Python programming language
• Understanding of software architecture and software design
• Must have fully managed a project with a team
• Having worked with Agile project management practices
• Experience with data processing, analytics, and visualization tools in Python (such as pandas, matplotlib, SciPy, etc.); a short sketch follows this list
• Strong understanding of SQL and querying NoSQL databases (e.g., Mongo, Cassandra, Redis)
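For illustration, a minimal pandas/matplotlib sketch (the data and column names are invented for this example):

```python
# A minimal analysis-and-visualization sketch: resample hourly data to daily means.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "ts": pd.date_range("2024-01-01", periods=100, freq="h"),
    "value": range(100),
})
daily = df.set_index("ts").resample("D")["value"].mean()

daily.plot(kind="line", title="Daily mean value")
plt.tight_layout()
plt.savefig("daily_mean.png")
```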
Minimum 4 to 10 years of experience in testing distributed backend software architectures/systems.
• 4+ years of work experience in test planning and automation of enterprise software
• Expertise in programming using Java or Python and other scripting languages.
• Experience with one or more public clouds is expected.
• Comfortable with build processes, CI processes, and managing QA Environments as well as working with build management tools like Git, and Jenkins
• Experience with performance and scalability testing tools.
• Good working knowledge of relational databases, logging, and monitoring frameworks is expected.
Familiarity with system flows and how components such as Elasticsearch, Mongo, Kafka, Hive, Redis, and AWS interact with an application.
Required Skills:
• Minimum of 4-6 years of experience in data modeling (including conceptual, logical, and physical data models).
• 2-3 years of experience in Extraction, Transformation and Loading (ETL) work using data migration tools like Talend, Informatica, DataStage, etc.
• 4-6 years of experience as a database developer in Oracle, MS SQL, or another enterprise database, with a focus on building data integration processes.
• Candidate should have exposure to a NoSQL technology, preferably MongoDB.
• Experience in processing large data volumes, indicated by experience with Big Data platforms (Teradata, Netezza, Vertica, Cloudera, Hortonworks, SAP HANA, Cassandra, etc.).
• Understanding of data warehousing concepts and decision support systems.
• Ability to deal with sensitive and confidential material and adhere to worldwide data security standards.
• Experience writing documentation for design and feature requirements.
• Experience developing data-intensive applications on cloud-based architectures and infrastructures such as AWS, Azure, etc.
• Excellent communication and collaboration skills.


KEY RESPONSIBILITIES
- Building a website based on the given requirements and ensure it’s successfully deployed
- Responsible for designing, planning, and testing new web pages and site features
- A propensity for brainstorming and coming up with solutions to open-ended problems
- Work closely with other teams, and project managers, to understand all stakeholders’ requirements and ensure that all specifications and requirements are met in final development
- Troubleshoot and solve problems related to website functionality
- Takes ownership of initiatives and drives them to completion.
- Desire to learn and dive deep into new technologies on the job, especially around modern data storage and streaming open source systems
- Responsible for creating, optimizing, and managing REST APIs
- Create website content and enhance website usability and visibility
- Ensure cross-browser compatibility and testing for mobile responsiveness
- Ability to integrate payment processing and search functionality software solutions
- Stay up-to-date with technological advancements and the latest coding practices
- Collaborate with the team of designers, content managers, and developers to determine site goals, functionality, and layout
- Monitor website traffic and overall system health with Google Analytics to ensure a high GTmetrix score
- Build the front-end of applications through appealing visual design
- Design client-side and server-side architecture
- Develop server-side logic and APIs that integrate with front-end applications.
- Architect and design complex database structures and data models.
- Develop and implement backend systems to support scalable and high-performance web applications.
- Create automated tests to ensure system stability and performance.
- Ensure security and data privacy measures are maintained throughout the development process.
- Maintain an up-to-date changelog for all new, updated, and fixed changes.
- Ability to document and manage all the software design, requirements, reusable & transferable code, and other technical aspects of the project.
- Create and convert storyboards and wireframes into high-quality full-stack code
- Write, execute, and maintain clean, reusable, and scalable code
- Design and implement low-latency, high-availability, and performant applications
- Implement security and data protection
- Ensure code that is platform and device-agnostic
EDUCATION & SKILLS REQUIREMENT
- B.Tech. / BE / MS degree in Computer Science or Information Technology
- Expertise in MERN stack (MongoDB, Express.js, React.js, Node.js)
- Should have prior working experience of at least 3 years as a web developer or full-stack developer
- Should have done projects in e-commerce or have preferably worked with companies operating in e-commerce
- Should have expert-level knowledge in implementing frontend technologies
- Should have worked in creating backend and have deep understanding of frameworks
- Experience in the complete product development life cycle
- Hands-on experience with JavaScript, HTML, CSS, jQuery, JSON, PHP, XML
- Proficiency in databases, both operational and analytical (e.g., MySQL, MongoDB, PostgreSQL, DynamoDB, Redis, Hive, Elastic, etc.)
- Knowledge of architecting or implementing search APIs
- Great understanding of data modeling and RESTful APIs
- Strong knowledge of CS fundamentals, data structures, algorithms, and design patterns
- Strong analytical, consultative, and communication skills
- Excellent understanding of Microsoft Office tools: Excel, Word, PowerPoint, etc.
- Excellent organizational and time management skills
- Experience with responsive and adaptive design (Web, Mobile & App)
- Should be a self-starter with the ability to work without being supervised
- Excellent debugging and optimization skills
- Experience building high throughput/low latency systems.
- Knowledge of big data systems such as Cassandra, Elastic, Kafka, Kubernetes, and Docker
- Should be willing to be part of a small team and work in a fast-paced environment
- Should be highly passionate about building products that create a significant impact.
- Should have experience in user experience design, website optimization techniques, and different PIM tools
Job description
- Engage with the business team and stakeholder at different levels to understand business needs, analyze, document, prioritize the requirements, and make recommendations on the solution and implementation.
- Delivering the product that meets business requirements, reliability, scalability, and performance goals
- Work with the Agile scrum team to create the scrum team strategy roadmap/backlog, and develop a minimum viable product and Agile user stories that drive a highly effective and efficient project development and delivery scrum team.
- Work on Data mapping/transformation, solution design, process diagram, acceptance criteria, user acceptance testing and other project artifacts.
- Work effectively with the technical/development team and help them understand the specifications/requirements for technical development, testing and implementation.
- Ensure solutions promote simplicity, efficiency, and conform to enterprise and architecture standards and guidelines.
- Partner with the support organization to provide training, support and technical assistance to operation team and end users as necessary
- Product/Application Developer
- Designs and develops software applications based on user requirements in a variety of coding environments such as graphical user interface, database query languages, report writers, and specific development languages
- Consult on the use and implementation of software products and applications and specialize in the business development environment, including the selection of development tools and methodology
Primary / Mandatory skills:
- Overall Experience: Overall 4 to 6 years of IT development experience
- Design and code Node.js-based microservices and API web services with NoSQL technologies (Cassandra/MongoDB)
- Expert in developing code for Node.js-based microservices in TypeScript
- Good experience in understanding data transmission through pub/sub mechanisms like Event Hub and Kafka
- Good understanding of analytics and clickstream data capture is a HUGE plus
- Good understanding of frameworks like Java Spring Boot and Python is preferred
- Good understanding of Microsoft Azure principles and services is preferred
- Able to write Unit test cases
- Familiarity with performance testing tools such as Akamai SOASTA is preferred
- Good knowledge of source code control tools like Git, code clout, etc., and an understanding of CI/CD (Jenkins and Kubernetes)
- Solid technical background with understanding and/or experience in software development and web technologies
- Strong analytical skills and the ability to convert consumer insights and performance data into high impact initiatives
- Experience working within scaled agile development team
- Excellent written and verbal communication skills with demonstrated ability to present complex technical information in a clear manner to peers, developers, and senior leaders
- The desire to be continually learning about emerging technologies/industry trends
We are looking for a DevOps Engineer (individual contributor) to maintain and build upon our next-generation infrastructure. We aim to ensure that our systems are secure, reliable and high-performing by constantly striving to achieve best-in-class infrastructure and security by:
- Leveraging a variety of tools to ensure all configuration is codified (using tools like Terraform and Flux) and applied in a secure, repeatable way (via CI)
- Routinely identifying new technologies and processes that enable us to streamline our operations and improve overall security
- Holistically monitoring our overall DevOps setup and health to ensure our roadmap constantly delivers high-impact improvements
- Eliminating toil by automating as many operational aspects of our day-to-day work as possible using internally created, third party and/or open-source tools
- Maintaining a culture of empowerment and self-service by minimizing friction for developers to understand and use our infrastructure, through a combination of innovative tools, excellent documentation, and teamwork
Tech stack: Microservices primarily written in JavaScript, Kotlin, Scala, and Python. The majority of our infrastructure sits within EKS on AWS, using Istio. We use Terraform and Helm/Flux when working with AWS and EKS (k8s). Deployments are managed with a combination of Jenkins and Flux. We rely heavily on Kafka, Cassandra, Mongo and Postgres and are increasingly leveraging AWS-managed services (e.g. RDS, lambda).