
1. Solid Databricks and PySpark experience
2. Must have worked on projects dealing with data at terabyte scale
3. Must have knowledge of Spark optimization techniques
4. Must have experience setting up job pipelines in Databricks
5. Basic knowledge of GCP and BigQuery is required
6. Understanding of LLMs and vector databases
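The Spark optimization techniques in point 3 usually include broadcast joins, partition tuning, and key salting for skewed data. As a minimal plain-Python sketch of the salting idea (no Spark cluster assumed; the function names are illustrative, not from any library):

```python
import random

def salt_keys(rows, num_salts=4):
    """Append a random salt to each key so a single hot key no longer
    funnels every record into one partition."""
    return [(f"{key}_{random.randrange(num_salts)}", value) for key, value in rows]

def partition_for(key, num_partitions=4):
    """Hash-partition a (possibly salted) key, as Spark's HashPartitioner does."""
    return hash(key) % num_partitions

# A heavily skewed input: every record shares the same key.
rows = [("hot_key", i) for i in range(1000)]
salted = salt_keys(rows)
# Number of distinct partitions the hot key now lands on (spread across salts).
print(len({partition_for(k) for k, _ in salted}))
```

In PySpark the same idea would typically be expressed with `F.concat(F.col("key"), F.lit("_"), (F.rand() * num_salts).cast("int"))`, followed by a second aggregation that strips the salt.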

Who We Are
Nudge enables consumer companies to personalize shopper journeys without depending on developer bandwidth. We believe the future of commerce lies in enabling every brand to create dynamic, personalized experiences at scale. Our SDKs power these experiences, letting companies configure UI/UX directly from our dashboard without additional engineering work.
We’re backed by leading global investors and are building a team in Bangalore that thrives on energy, curiosity, and ownership. We value strong opinions held loosely: we love people who care deeply about their craft, but who are flexible and collaborative when building together.
The Opportunity
We are looking for a founding Frontend Engineer with a strong foundation in JavaScript and web technologies. In this role, you will design and maintain our Dashboard and the web SDKs used by consumer apps. You’ll be solving real challenges of scale, performance, and security.
At Nudge, engineers own the product end-to-end. You won’t just receive PRDs to execute on; you’ll play an active role in shaping the product roadmap, influencing technical and product decisions, and ensuring what we build truly serves our customers.
What You’ll Do
- Lead development of our customer-facing interfaces: Dashboard, Chrome Extension, and Web SDK.
- Collaborate with product and design from idea to implementation, owning features end-to-end.
- Write modular, maintainable code that scales across use cases and user types.
- Shape our frontend architecture and tech stack as we scale.
- Continuously improve performance—through bundle optimization, caching, or efficient state management.
- Use TypeScript, React, Next.js, Redux Toolkit, React Query, and browser storage tools (localStorage, sessionStorage, cookies) to build seamless, fast experiences.
What We’re Looking For
- 3–5 years of professional experience in web development.
- Experience building or maintaining SDKs, libraries, or frameworks that ship to external developers.
- Strong understanding of web fundamentals, browser internals, and frontend system design.
- Proficiency in networking (HTTP, WebSockets) and local data persistence.
- Experience working with large language models (LLMs) in production.
- Familiarity with LangChain or Vercel AI SDK.
- Experience with server-driven UI and dynamic component rendering is a strong plus.

Below is the JD for the Senior Micro Frontend Engineer opening at TechMango.
Role: Senior Micro Frontend Engineer
Experience: 5–10 years
Job Location: Madurai
Mode: Remote (initial 1 month in office)
Mandatory skills: Micro Frontend, Angular, HTML, CSS, JavaScript, Module Federation, Web Components
Job Description:
As a Micro Frontend Angular Engineer, your role is crucial in shaping our innovative SaaS platform to ensure its user-friendliness and accessibility. By focusing on user interface performance and design consistency, nurturing team growth, prioritizing the product, and fostering knowledge sharing, you will drive innovation and guarantee a top-tier user experience. Your work will directly influence the look and feel of our product, making it more intuitive and enjoyable for our customers. Your contribution will be vital to our continued success.
THIS IS YOU:
- Extensive experience with Angular and TypeScript. You are comfortable with the ins and outs of these and related technologies, from building and deploying applications to debugging and optimizing performance.
- A passion for shipping. Bonus points for Trunk Based Development. CI/CD is your default. Putting code live every day is standard practice.
- Familiarity with module federation and cloud environments. You understand how to build and deploy micro frontends using module federation and how to optimize performance and scalability in cloud environments.
- Expertise in developing and maintaining component libraries. You have experience in building and maintaining component libraries that can be used across multiple applications, ensuring consistency, and reducing development time.
- User-centric approach to frontend development. You prioritize user needs and preferences when developing front-end interfaces, working closely with product designers to create intuitive and engaging user experiences. Attention to the details when implementing UI.
- A “best tool for the job” mentality. You are not a zealot and know that having a hammer does not make everything a nail. You are not afraid to try something new and know how to build consensus and knowledge in the team for new tech and concepts.
- Excellent English communication skills. You can naturally work with people from different backgrounds, both technical and non-technical. You are comfortable defending your ideas and challenging others. People enjoy working and debating with you.
Mandatory Skills: Micro Frontend, Angular, JavaScript/TypeScript, Module Federation/Web Components
Total Experience:
Relevant Experience in Micro frontend:
Exp in Angular:
Exp in Module federation/Web component JS:
Communication skills (rate yourself out of 5):
Current Company:
Current Location:
Preferred Location:
CCTC:
ECTC:
Notice Period:
Are you able to join us within 15–20 days? Yes/No:
If you are serving notice, please mention your last working date:
Marital Status & Native Location:
Are you open to travelling to Madurai for 1 week during onboarding? Yes/No:
If you are holding any offer, please mention the CTC and job location:
Available for a virtual interview on weekdays? Yes/No:
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
Overview
Adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job description
As part of our role at a leading global insurance company, we are responsible for developing and managing applications.
To reinforce our dynamic development team, we are seeking a skilled Full-stack Developer. In this role, you will collaborate with international cross-functional teams to design, develop, and deploy high-quality software solutions.
Responsibilities:
Design, develop, and maintain the application
Write clean, efficient, and reusable code
Implement new features and functionality based on business requirements
Participate in system and application architecture discussions
Create technical designs and specifications for new features or enhancements
Write and execute unit tests to ensure code quality
Debug and resolve technical issues and software defects
Conduct code reviews to ensure adherence to best practices
Identify and fix vulnerabilities to ensure application integrity
Work with frontend developers to ensure seamless integration of user-facing elements
Collaborate with DevOps teams for deployment and scaling
Requirements:
Bachelor’s degree in computer science or information technology, or a related field.
Proven experience as a skilled Full-stack developer. Experience in Insurance domain is appreciated.
Strong experience with Terraform, Java (Spring Boot), AWS, GitLab and Angular, NGXS, State Management, Typescript
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Terraform, Java, Spring Boot, AWS, GitLab, Angular, NGXS, State Management, TypeScript, Unit Testing, Debugging, Code Review, Software Architecture, DevOps Collaboration, Frontend Integration, Technical Design, Problem-Solving, Communication, Teamwork.
- Big Data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis, and data integration, implementing enterprise-level Big Data systems.
- A skilled developer with strong problem-solving, debugging, and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components such as Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala, and Oozie.
- Hands-on experience in creating real-time data streaming solutions using Apache Spark Core, Spark SQL and DataFrames, Kafka, Spark Streaming, and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, including the NameNode, DataNode, ResourceManager, NodeManager, and Job History Server.
- Worked with both the Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in the installation, configuration, and management of Big Data workloads and the underlying infrastructure of a Hadoop cluster.
- Hands-on experience coding MapReduce/YARN programs in Java, Scala, and Python for analyzing Big Data.
- Exposure to Cloudera development environment and management using Cloudera Manager.
- Extensively worked with Spark using Scala on clusters for analytics; installed it on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
- Implemented Spark jobs in Python, using DataFrames and the Spark SQL API for faster data processing; handled importing data from different sources into HDFS using Sqoop and performed transformations using Hive and MapReduce before loading the data into HDFS.
- Used the Spark DataFrames API on the Cloudera platform to perform analytics on Hive data.
- Hands-on experience with Spark MLlib, used for predictive intelligence and customer segmentation within Spark Streaming pipelines.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Created data pipelines for ingesting and aggregating different events, loading consumer response data into Hive external tables in HDFS to serve as a feed for Tableau dashboards.
- Hands-on experience using Sqoop to import data into HDFS from RDBMS sources and vice versa.
- In-depth understanding of Oozie for scheduling Hive/Sqoop/HBase jobs.
- Hands on expertise in real time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 and S3, including automating the setup and extension of clusters in the AWS cloud.
- Extensively worked with Spark using Python on clusters for analytics; installed it on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL.
- Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
- Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
- Experienced in writing ad hoc queries using Cloudera Impala, including Impala analytical functions.
- Experience in creating DataFrames using PySpark and performing operations on them using Python.
- In-depth understanding of Hadoop architecture and its components, such as HDFS, the MapReduce programming paradigm, High Availability, and the YARN architecture.
- Established multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
- Generated various knowledge reports using Power BI based on business specifications.
- Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
- Well experienced in projects using JIRA, testing, and the Maven and Jenkins build tools.
- Experienced in designing, building, deploying, and utilizing almost the entire AWS stack (including EC2 and S3), focusing on high availability, fault tolerance, and auto-scaling.
- Good experience with use-case development and with software methodologies such as Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
- Good working experience importing data using Sqoop and SFTP from sources such as RDBMS, Teradata, mainframes, Oracle, and Netezza into HDFS, and performing transformations on it using Hive, Pig, and Spark.
- Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
- Proficient in NoSQL databases, including HBase, Cassandra, and MongoDB, and their integration with Hadoop clusters.
- Hands-on experience in Hadoop Big Data technology, working with MapReduce, Pig, and Hive as analysis tools, and Sqoop and Flume as data import/export tools.
Job Description:
Infrastructure as Code (IaC): Implement and maintain infrastructure as code using tools like Terraform or AWS CloudFormation, ensuring scalability and reliability of cloud resources.
Continuous Integration/Continuous Deployment (CI/CD): Establish and enhance CI/CD pipelines to automate build, test, and deployment processes, facilitating faster and more reliable releases.
Configuration Management: Manage and automate server configurations using tools like Ansible, Chef, or Puppet, ensuring consistency and compliance across environments.
Monitoring and Logging: Set up and maintain monitoring and logging solutions (e.g., CloudWatch, ELK stack) to proactively identify and troubleshoot issues within the AWS infrastructure.
Security and Compliance: Implement security best practices and compliance standards (e.g., AWS IAM, Security Groups) to protect data and resources. Conduct security audits and vulnerability assessments.
The Successful Applicant
AWS Proficiency: Extensive experience with Amazon Web Services (AWS) and its services, including EC2, S3, RDS, Lambda, and more. AWS Certified DevOps Engineer certification is a plus.
Infrastructure as Code (IaC): Proficiency in writing and maintaining infrastructure as code using tools like Terraform, AWS CloudFormation, or similar technologies.
CI/CD Tools: Strong knowledge of CI/CD tools such as Jenkins, Travis CI, or AWS CodePipeline. Ability to configure and optimize automated build and deployment pipelines.
Containerization: Hands-on experience with containerization technologies like Docker and container orchestration platforms like Kubernetes in AWS.
Scripting and Automation: Proficiency in scripting languages (e.g., Bash, Python) for automating tasks and managing infrastructure.
Monitoring and Logging Tools: Familiarity with monitoring and logging tools such as CloudWatch, ELK stack (Elasticsearch, Logstash, Kibana), Prometheus, Grafana, or Splunk.
Security and Compliance: Strong understanding of AWS security best practices, identity and access management (IAM), and compliance frameworks (e.g., HIPAA, GDPR).
Networking: Knowledge of AWS networking concepts, VPC, and routing. Experience with VPNs, Direct Connect, and other networking technologies is a plus.
Database Administration: Basic knowledge of database administration on AWS, including RDS, DynamoDB, or Aurora.
Collaboration and Communication: Excellent communication and teamwork skills to collaborate with cross-functional teams and convey technical information effectively.
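The scripting-and-automation requirement above usually means small glue scripts around the monitoring and logging stack. A minimal sketch, assuming a conventional `timestamp time level message` log format (the format and function names are illustrative, not taken from any specific tool):

```python
import re
from collections import Counter

# Matches lines like: "2024-01-01 12:00:01 ERROR connection refused"
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>INFO|WARN|ERROR) (?P<msg>.*)$")

def count_levels(lines):
    """Tally log levels; the first step of most alerting/triage scripts."""
    counts = Counter()
    for line in lines:
        match = LOG_LINE.match(line)
        if match:
            counts[match.group("level")] += 1
    return counts

sample = [
    "2024-01-01 12:00:00 INFO service started",
    "2024-01-01 12:00:01 ERROR connection refused",
    "2024-01-01 12:00:02 ERROR connection refused",
]
print(count_levels(sample)["ERROR"])  # 2
```

In a CloudWatch or ELK setup the same tally would typically be pushed as a custom metric or query rather than computed locally.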
- Understanding product features, preparing technical documentation.
- Take ownership over project timelines and deliverables.
- Contribute to the design, architecture and build of our Core Platform.
- Help scale the platform and build out new features.
- Design and develop high-volume, low-latency applications for mission-critical systems, delivering high-availability and performance.
- Self-directed, but also work well with other engineers.
- Write well designed, testable, efficient code.
- Anticipate future technical needs and craft plans to realize them, and balance feature development with investments in tech debt reduction.
- The ability to combine speed and quality, and to achieve both at the same time.
- Excited about working for a startup and moving quickly.
- Experience in team handling will be a plus.
- Work with customers for understanding requirements whenever needed.
- BS/MS in Computer Science or Engineering.
- 8+ years of experience in software development in an object-oriented language or platform such as Java, .NET, or Node.js.
- Exceptional design, coding, and problem-solving skills, with a bias for architecture at scale.
- Experience with HTML5, JavaScript, TypeScript, front-end technologies like AngularJS, Redux / React and upcoming web technologies.
- Real-world experience developing large-scale commercial services with robust performance, resiliency, and telemetry, delivered both online and on-premises.
- Strong knowledge of computer science, algorithms, and design patterns.
- Ability to work through complex problems with a thorough design.
Responsibilities:
- Implement effective sourcing, screening, and interviewing techniques
- Developing fair HR policies and ensuring employees understand and comply with them
- Manage employee relations, employee engagement, appraisals, PF, ESI, etc.
- Act as the point of contact regarding labor legislation issues
- Manage employees’ grievances
- Create and run referral bonus programs
- Oversee daily operations of the HR department
Requirements:
- Must have at least 2 years of Human Resources experience in a manufacturing/industrial company.
- Proven work experience as an HR Executive, HR Manager or similar role
- Familiarity with Human Resources Management Systems and Applicant Tracking Systems
- Experience with full-cycle recruiting
- Good knowledge of labor legislation (particularly employment contracts, employee leaves, and insurance)
- Demonstrable leadership abilities
- Solid communication skills
- BSc/MSc in Human Resources Management or relevant field
