19+ AWS Lambda Jobs in Chennai | AWS Lambda Job openings in Chennai
Job Overview:
We are looking for an experienced Senior Full Stack Developer with expertise in building scalable web applications using React, Next.js, Node.js, NestJS, and TypeScript. You should have experience working with both NoSQL (MongoDB) and SQL databases (MySQL, PostgreSQL), as well as a strong understanding of AWS services. Experience in integrating third-party APIs is a plus. This role will involve working on full-stack development, designing efficient architectures, and delivering high-quality solutions for complex, data-driven applications.
Responsibilities:
- Design, develop, and maintain full-stack web applications using React, Next.js, Node.js, NestJS, and TypeScript.
- Architect and manage databases including MongoDB, MySQL, and PostgreSQL for high-performance applications.
- Deploy and maintain applications on AWS, leveraging services like EC2, Lambda, S3, RDS, and more.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop scalable APIs and integrate third-party services and custom APIs.
- Ensure the performance, security, and responsiveness of applications.
- Write clean, maintainable, and efficient code, adhering to best practices.
- Lead and mentor junior developers, participate in code reviews, and contribute to architectural decisions.
- Troubleshoot, debug, and optimize application performance.
Requirements:
- 4+ years of experience in full-stack development.
- Strong expertise in React, Next.js, Node.js, NestJS, and TypeScript.
- Proficient with NoSQL (MongoDB) and SQL databases (MySQL, PostgreSQL).
- Experience with cloud services, particularly AWS (EC2, Lambda, S3, etc.).
- Knowledge of RESTful APIs and GraphQL.
- Experience with CI/CD pipelines and automated deployment practices.
- Familiarity with Docker and containerized applications.
- Knowledge of Git and version control.
- Experience with custom third-party API integrations is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Preferred Qualifications:
- Experience with testing frameworks like Jest or Mocha.
- Familiarity with DevOps practices and infrastructure as code (IaC).
- Knowledge of microservices architecture.
- Exposure to front-end UI/UX frameworks and libraries like TailwindCSS.
DevOps Lead Engineer
We are seeking a skilled DevOps Lead Engineer with 8 to 10 years of experience to own the entire DevOps lifecycle and be accountable for its implementation. The DevOps Lead Engineer is responsible for automating the manual tasks involved in building and deploying code and data, implementing continuous integration and continuous deployment frameworks, and maintaining high availability of production and non-production environments.
Essential Requirements (must have):
• Bachelor's degree, preferably in Engineering.
• Solid 5+ years of experience with AWS, DevOps, and related technologies.
Skills Required:
Cloud Performance Engineering
• Performance scaling in a Micro-Services environment
• Horizontal scaling architecture
• Containerization (such as Docker) & Deployment
• Container Orchestration (such as Kubernetes) & Scaling
DevOps Automation
• End-to-end release automation.
• Solid experience with DevOps tools like Git, Jenkins, Docker, Kubernetes, Terraform, Ansible, CloudFormation (CFN), etc.
• Solid experience in infrastructure automation (Infrastructure as Code), deployment, and implementation.
• Candidates must possess experience using Linux and Jenkins, and ample experience configuring and automating monitoring tools.
• Strong scripting knowledge
• Strong analytical and problem-solving skills.
• Cloud and On-prem deployments
Infrastructure Design & Provisioning
• Infra provisioning.
• Infrastructure Sizing
• Infra Cost Optimization
• Infra security
• Infra monitoring & site reliability.
Job Responsibilities:
• Create software deployment strategies that are essential for the successful deployment of software in the work environment and that provide a stable environment for quality delivery.
• Design, build, configure, and optimize the automation systems that run the business's web and data infrastructure platforms.
• Create technology infrastructure and automation tools, and maintain configuration management.
• Oversee and lead the activities of the DevOps team: conduct training sessions for junior team members, provide mentoring and career support, and own the architecture and technical leadership of the entire DevOps infrastructure.
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions (see the sketch after this list).
- Competence in building data warehouses and loading data into them.
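As a rough illustration of the Step Functions item above, a minimal sketch only, assuming a hypothetical two-Lambda ETL flow; the state machine name, Lambda ARNs, and IAM role are placeholders, not part of this posting:

```python
import json

import boto3

# Amazon States Language definition: extract with one Lambda, then load with another.
definition = {
    "StartAt": "ExtractData",
    "States": {
        "ExtractData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract-data",
            "Next": "LoadWarehouse",
        },
        "LoadWarehouse": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:load-warehouse",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="etl-pipeline",  # hypothetical name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/etl-sfn-role",  # placeholder role
)
```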
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements.
- Implement AWS Lambda for event-based tasks (see the sketch after this list).
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
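A minimal sketch of the event-based Lambda responsibility above, assuming an S3 put-event trigger; the staging bucket and prefix are hypothetical:

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Triggered by an S3 put event; stages each new object for a downstream ETL job."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3.copy_object(
            Bucket="etl-staging-bucket",  # placeholder bucket
            Key=f"incoming/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
    return {"staged": len(event["Records"])}
```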
- 5-7 years of experience in Data Engineering, with solid experience in the design, development, and implementation of end-to-end data ingestion and data processing systems on the AWS platform.
- 2-3 years of experience in AWS Glue, Lambda, AppFlow, EventBridge, Python, PySpark, Lake House, S3, Redshift, Postgres, API Gateway, CloudFormation, Kinesis, Athena, KMS, and IAM (a minimal Glue job skeleton follows this list).
- Experience in modern data architecture: Lake House, Enterprise Data Lake, Data Warehouse, API interfaces, solution patterns, standards, and optimizing data ingestion.
- Experience building data pipelines from source systems like SAP Concur, Veeva Vault, Azure Cost, various social media platforms, or similar source systems.
- Expertise in analyzing source data and designing a robust and scalable data ingestion framework and pipelines adhering to client Enterprise Data Architecture guidelines.
- Proficient in the design and development of solutions for real-time (or near real-time) stream processing as well as batch processing on the AWS platform.
- Work closely with business analysts, data architects, data engineers, and data analysts to ensure that data ingestion solutions meet the needs of the business.
- Troubleshoot and provide support for issues related to data quality and data ingestion solutions; this may involve debugging data pipeline processes, optimizing queries, or troubleshooting application performance issues.
- Experience working with Agile/Scrum methodologies, CI/CD tools and practices, coding standards, code reviews, source management (GitHub), JIRA, JIRA Xray, and Confluence.
- Experience with or exposure to design and development using full-stack tools.
- Strong analytical and problem-solving skills, excellent communication (written and oral), and interpersonal skills.
- Bachelor's or master's degree in computer science or a related field.
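For orientation, a minimal AWS Glue (PySpark) job skeleton of the kind implied above; the database, table, and S3 path are placeholders, not details from this posting:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog (names are hypothetical).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="concur_expenses"
)

# Land curated Parquet on S3 for Redshift Spectrum / Athena consumption.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-lake/curated/expenses/"},
    format="parquet",
)
job.commit()
```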
Confidential
Strategic Vendor Alliance – AWS Practice Lead and overall P&L owner for the India business, driving profitability.
A) Manage the AWS Practice for Clients and execute the strategic business plan for the company and the channel-partner ecosystem across the different AWS services.
B) Build key relationships with various segment leaders in AWS, from the Commercial and Public Sector, and create Client-led AWS solutions for Partners to simplify the cloud approach for customers.
C) Build a predictable pipeline of joint opportunities via differentiated propositions to customers and Partners, working with AWS on unique offerings specific to the Client.
D) Drive an SMB-focused approach for AWS, which contributes 50% of the Client's overall business, using ready-to-use cloud bundles for various workloads.
E) Own the relationship map; drive and monitor cadence meetings with both internal sellers and channel partners, measured by parameters such as incremental growth, customer acquisition, partner onboarding, managed-services-led approach, migration and deployment, and the success of each GTM.
F) Lead a team of Product Managers who will drive specific GTMs (such as SAP on AWS, SMB scale drive, education focus, CloudFront/CDN bundle, and strategic workloads like Microsoft workloads, DC migration, and DR on cloud).
G) Manage partner-profitability metrics by creating avenues for recurring resale consumption and services-led engagement for partners.
H) Work on the long-term direction of the company business plan to drive incremental growth.
I) Collaborate with internal peers to build the cloud business model and the framework for attaching managed services to the various Hyperscaler services.
• Strong in basic C++, STL, and Linux
• OOP and exception handling
• Design patterns, SOLID principles, and concepts related to UML representation
• Solution, design, and architecture concepts
• Knowledge of pointers and smart pointers
• IO streams, files and streams, and lambda expressions in C++ are an added advantage
• Knowledge of C++17 features and usage of the STL in C++ is an added advantage
• Templates in C++
• Communication skills, attitude, learnability
at Altimetrik
Location: Chennai, Pune, Bangalore, Jaipur
Experience: 5 to 8 years
- Implement best practices for the engineering team across code hygiene, overall architecture design, testing, and deployment activities
- Drive technical decisions for building data pipelines, data lakes, and analyst access.
- Act as a leader within the engineering team, providing support and mentorship for teammates across functions
- Bachelor’s Degree in Computer Science or equivalent job experience
- Experienced developer in large data environments
- Experience using Git productively in a team environment
- Experience with Docker
- Experience with Amazon Web Services
- Ability to sit with business or technical SMEs to listen, learn and propose technical solutions to business problems
- Experience using and adapting to new technologies
- Take and understand business requirements and goals
- Work collaboratively with project managers and stakeholders to make sure that all aspects of the project are delivered as planned
- Strong SQL skills with MySQL or PostgreSQL
- Experience with non-relational databases and their role in web architectures desired
Knowledge and Experience:
- Good experience with Elixir and functional programming is a plus
- Several years of Python experience
- Excellent analytical and problem-solving skills
- Excellent organizational skills
- Proven verbal and written cross-department and customer communication skills
The ideal person for the role will:
Possess a keen mind for solving tough problems by partnering effectively with various teams and stakeholders
Be comfortable working in a fast-paced, dynamic, and agile framework
Focus on implementing an end-to-end automated chain
Responsibilities
_____________________________________________________
Strengthen the application and environment security by applying standards and best practices and providing tooling to make development workflows more secure
Identify systems that can benefit from automation, monitoring and infrastructure-as-code and develop and scale products and services accordingly.
Implement sophisticated alerts and escalation mechanisms using automated processes (a brief sketch follows this list)
Help increase production system performance with a focus on high availability and scalability
Continue to keep the lights on (day-to-day administration)
Programmatically create infrastructure in AWS, leveraging Autoscaling Groups, Security Groups, Route53, S3 and IAM with Terraform and Ansible.
Enable our product development team to deliver new code daily through Continuous Integration and Deployment Pipelines.
Create a secure production infrastructure and protect our customer data with continuous security practices and monitoring
Design, develop and scale infrastructure-as-code
Establish SLAs for service uptime, and build the necessary telemetry and alerting platforms to enforce them
Architect and build continuous data pipelines for data lakes, Business Intelligence and AI practices of the company
Remain up to date on industry trends, share knowledge among teams and abide by industry best practices for configuration management and automation.
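The posting names Terraform and Ansible as the automation toolchain; as a language-neutral stand-in, here is a minimal boto3 sketch of the alerting responsibility above (the alarm, auto-scaling group, and SNS topic names are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU on a web auto-scaling group stays above 80% for 10 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",  # hypothetical name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
```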
Qualifications and Background
_______________________________________________________
Graduate degree in Computer Science and Engineering or related technologies
Work or research project experience of 5-7 years, with a minimum of 3 years of experience directly related to the job description
Prior experience working in HIPAA / HITRUST frameworks will be given preference
About Witmer Health
_________________________________________________________
We exist to make mental healthcare more accessible, affordable, and effective. At Witmer, we are on a mission to build a research-driven, global mental healthcare company to work on developing novel solutions - by harnessing the power of AI/ML and data science - for a range of mental illnesses like depression, anxiety, OCD, and schizophrenia, among others. Our first foray will be in the space of workspace wellness, where we are building tools to help individual employees and companies improve their mental wellness and raise productivity levels.
2. Strong knowledge of Node.js, JavaScript & TypeScript.
3. Develop and maintain all server-side components.
4. Develop high-performance and scalable APIs to serve clients.
5. Collaborate with front-end developers on the integration of well-written APIs.
6. Implement effective security protocols, data protection measures, and storage solutions.
7. Investigate issues by reviewing/debugging code, provide fixes and workarounds, and review changes for operability to maintain existing software solutions.
8. Develop and manage well-functioning databases and applications.
9. Run diagnostic tests, repair defects, and provide technical support.
10. Be our Node.js champion by keeping an eye out for emerging technologies and recommending improvements.
11. Work within a team, collaborate and add value through participation in peer code reviews, provide comments and suggestions, and work with cross-functional teams to achieve goals.
12. Design back-end services for various business processes.
13. Assume technical accountability for your specific work products within an application and provide technical support during solution design for new requirements.
14. Design server-side architecture.
client of peoplefirst consultants
SKILLS: UI Development, Angular, JavaScript, HTML, CSS, Monitoring Management, etc.
Role
We are looking for a Lead UI Engineer to design and build multiple channels for user interaction.
Responsibilities
- Design/Architect and develop core features of the UI/frontend.
- Design/Architect and develop policy framework to provide rich UI for various features of the solution.
Requirements:
- Must have 6+ years of experience working with frontend technologies.
- Must have at least 2-3 years of experience with Angular 2 or above versions.
- Must have at least 5+ years of experience with HTML/CSS/JavaScript/TypeScript.
- Experience with PrimeNG and Vega is a plus
- Strong background in developing UI for Monitoring and Management systems, dealing with topology, and different telemetry such as metrics, traces and logs
- Familiar with containerization solutions like Docker/Kubernetes etc.
- Familiar with serverless technologies like AWS Lambda.
- B.E/B.Tech/MS degree in Computer Science, or equivalent
Reputed MNC client of people first consultant
Experience: 9-12 years
Location: Bangalore
Job Description
Strong experience across application migration to the cloud, cloud-native architecture, Amazon EKS, and serverless (Lambda).
Delivery of customer cloud strategies aligned with the customer's business objectives, with a focus on cloud migrations and app modernization.
Design of clients' cloud solutions with a focus on AWS.
Undertake short-term delivery engagements related to cloud architecture with a specific focus on AWS and Cloud Migrations/Modernization.
Provide leadership in migration and modernization methodologies and techniques including mass application movements into the cloud.
Implementation of AWS within large regulated enterprise environments.
Nurture Cloud computing expertise internally and externally to drive Cloud Adoption.
Work with designers and developers in the team to guide them through the solution implementation.
Participate in performing Proofs of Concept (POC) for various upcoming technologies to fit business requirements.
A global business process management company
Designation – Deputy Manager - TS
Job Description
- Total of 8-9 years of development experience in Data Engineering. B1/BII role.
- Minimum of 4-5 years in AWS Data Integrations, and should have very good data modelling skills.
- Should be very proficient in end-to-end AWS data solution design, which includes not only strong data ingestion and integration skills (both data at rest and data in motion) but also complete DevOps knowledge.
- Should have experience delivering at least 4 Data Warehouse or Data Lake solutions on AWS.
- Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc.
- Strong Python skills.
- Should be an expert in cloud design principles, performance tuning, and cost modelling. AWS certifications will be an added advantage.
- Should be a team player with excellent communication, able to manage their work independently with minimal or no supervision.
- A Life Science & Healthcare domain background will be a plus.
Qualifications
BE/BTech/ME/MTech
Essential Skills:
- 8 years of experience delivering highly available web/mobile applications, including 1-3 years as a Senior/Lead developer. Prior experience in the retail domain is a plus.
- 3 years of experience working with distributed teams.
- Deep knowledge of UI libraries/frameworks, APIs (REST), API management, and building scalable, high-performance Web APIs.
- Must have experience building websites using JavaScript and Java technologies (e.g., TypeScript, Spring Boot). Search engine and native app development experience is a plus.
- AWS serverless cloud-native services experience with Lambda functions, SNS, SQS, DynamoDB, API Gateway, etc.
- Strong knowledge of Caching frameworks, data structures, algorithms, operating systems, and distributed systems
- Strong understanding of databases, NoSQL data stores, storage and distributed persistence technologies
- Strong communication and presentation skills
- Passionate about enabling next generation experiences
- Experience with automated testing, deployment pipelines and cloud based infrastructure
ROLE DESCRIPTION:
- Develop omni-channel digital solutions, leveraging serverless and microservices in a cloud-based platform to develop backend services.
- Design high/low level solutions, contribute towards architecture and technical roadmap
- Lead technical implementation/delivery.
- Host/Lead technical discussions
- Champion software development best practices, test driven development, CI and CD
- Build cloud native and highly cost efficient solutions
- Innovate, Unlearn and Disrupt. Research next generation frameworks and technologies. Embrace change.
We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers. The ideal candidate has an adaptable and productive working style that fits a fast-moving environment.
Skills:
- 5+ years deploying Machine Learning pipelines in large enterprise production systems.
- Experience developing end to end ML solutions from business hypothesis to deployment / understanding the entirety of the ML development life cycle.
- Expert in modern software development practices; solid experience using source control management and CI/CD.
- Proficient in designing relevant architecture / microservices to fulfil application integration, model monitoring, training / re-training, model management, model deployment, model experimentation/development, alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like Lambda, Azure Functions, and/or Cloud Functions.
- Orchestration services like Data Factory, Data Pipeline, and/or Dataflow.
- Data science workbenches/managed services like Azure Machine Learning, SageMaker, and/or AI Platform.
- Data warehouse services like Snowflake, Redshift, BigQuery, and/or Azure SQL DW.
- Distributed computing services like PySpark, EMR, and Databricks.
- Data storage services like Cloud Storage, S3, Blob Storage, and S3 Glacier.
- Data visualization tools like Power BI, Tableau, QuickSight, and/or Qlik.
- Proven experience serving up predictive algorithms and analytics through batch and real-time APIs (see the sketch after this list).
- Solid working experience with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design the holistic solution.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.)
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit-learn, Spark MLlib, etc.)
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.)
- Experience in developing and debugging in one or more of the languages Java, Python.
- Ability to work in cross functional teams.
- Apply Machine Learning techniques in production including, but not limited to, neural nets, regression, decision trees, random forests, ensembles, SVM, Bayesian models, K-Means, etc.
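To make the real-time serving skill above concrete, a minimal sketch of a Lambda-style scoring handler for a pickled scikit-learn model; the artifact path and payload shape are assumptions, not this posting's design:

```python
import json

import joblib

# Loaded once per container, so warm invocations skip the deserialization cost.
model = joblib.load("/opt/ml/model.joblib")  # hypothetical artifact path

def handler(event, context):
    """Real-time scoring: expects {"features": [[...], ...]} in the request body."""
    body = json.loads(event["body"])
    predictions = model.predict(body["features"]).tolist()
    return {"statusCode": 200, "body": json.dumps({"predictions": predictions})}
```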
Roles and Responsibilities:
Deploying ML models into production, and scaling them to serve millions of customers.
Technical solutioning skills with a deep understanding of technical API integrations, AI/Data Science, Big Data, and public cloud architectures/deployments in a SaaS environment.
Strong stakeholder relationship management skills - able to influence and manage the expectations of senior executives.
Strong networking skills with the ability to build and maintain strong relationships with business, operations, and technology teams, internally and externally.
Provide software design and programming support to projects.
Qualifications & Experience:
Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with proven work experience as a Machine Learning Architect (Deployments) or in a similar role for 5-7 years.
- Must have 6+ years of experience in C/C++ programming language.
- Knowledge of Go programming language and Python programming language is a big plus.
- Strong background in L4-L7 Internet Protocols TCP, HTTP, HTTP2, GRPC and HTTPS/SSL/TLS.
- Background in Internet security related products such as Web Application Firewalls, API Security Gateways, Reverse Proxies and Forward Proxies
- Proven knowledge of Linux kernel internals (process scheduler, memory management, etc.)
- Experience with eBPF is a plus.
- Hands-on experience in cloud architectures (SaaS, PaaS, IaaS, distributed systems) with continuous delivery
- Familiar with containerization solutions like Docker/Kubernetes etc.
- Familiar with serverless technologies such as AWS Lambda.
- Exposure to machine learning technologies and distributed systems is a plus
- B.E/B.Tech/MS degree in Computer Science, or equivalent
A 15-year-old US-based product company
- Should have good hands-on experience in Informatica MDM Customer 360, Data Integration (ETL) using PowerCenter, and Data Quality.
- Must have strong skills in data analysis, data mapping for ETL processes, and data modeling.
- Experience with the SIF framework, including real-time integration.
- Should have experience in building C360 Insights using Informatica.
- Should have good experience in creating performant designs using Mapplets, Mappings, and Workflows for Data Quality (cleansing) and ETL.
- Should have experience in building different data warehouse architectures like Enterprise, Federated, and Multi-Tier.
- Should have experience in configuring Informatica Data Director for the data governance of users, IT Managers, and Data Stewards.
- Should have good knowledge of developing complex PL/SQL queries.
- Should have working experience with UNIX and shell scripting to run Informatica workflows and control the ETL flow.
- Should know about Informatica Server installation and have knowledge of the Administration console.
- Working experience with Developer alongside Administration is added knowledge.
- Working experience with Amazon Web Services (AWS) is an added advantage, particularly S3, Data Pipeline, Lambda, Kinesis, DynamoDB, and EMR (a brief Kinesis sketch follows this list).
- Should be responsible for the creation of automated BI solutions, including requirements, design, development, testing, and deployment.
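A brief sketch of the Kinesis piece mentioned above: a producer pushing customer-change records onto a stream. The stream name and record shape are hypothetical:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

def publish_change(customer):
    """Publish one customer-change record (stream name is a placeholder)."""
    kinesis.put_record(
        StreamName="c360-changes",
        Data=json.dumps(customer).encode("utf-8"),
        PartitionKey=str(customer["customer_id"]),
    )
```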
SourceLogix (https://www.srclogix.com/) is a California-based software development company. Founded in 2007, we have successfully built our client base with our values: Talent. Team. Trust.
SourceLogix experts help Fortune 500 clients in the areas of SaaS, eCommerce, VoIP, CRM, Transaction systems & Predictive Analytics.
Our latest venture is to build our own Cloud-based SaaS platform to disrupt the Digital Marketing space with a revolutionary voice & video platform - to help customers significantly improve conversions. The platform is built on Amazon Web Services, Google Cloud, and Salesforce CRM to drive Voice & Video with analytics & deep learning, all to significantly improve conversion rates.
We are setting up our offices in Chennai and Bangalore. Looking for an experienced Lead Developer who can become our employee #1 in India and eventually grow into a CTO role.
Job Requirements:
- Node.js and/or Python: expert level, over 5 years.
- Deep knowledge of Amazon Web Services (AWS): over 2 years.
- Expertise building on Lambda, API Gateway, and AWS Cognito (see the sketch after this list).
- DynamoDB, Amazon RDS.
- S3, EC2, Route 53.
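A minimal sketch of the Lambda / API Gateway / DynamoDB combination above, assuming an API Gateway proxy integration; the table and field names are illustrative only:

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("signups")  # hypothetical table

def lambda_handler(event, context):
    """Handle a signup POST proxied through API Gateway."""
    body = json.loads(event["body"])
    table.put_item(Item={"email": body["email"], "name": body["name"]})
    return {"statusCode": 201, "body": json.dumps({"ok": True})}
```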
Role and Responsibilities:
- Build the platform from the ground up.
- Work with designers & developers to create signup, login, telephony, dashboards and control panel functions.
- Manage AWS instances and run the platform cost-effectively.
- Work with founders to build the roadmap and prioritize tasks.
- Communicate effectively - daily standups, weekly demos, etc.
Benefits:
- Competitive salary - based on experience.
- 100% remote work. All you need is a good laptop, headset, and internet connectivity.
- Flexible work: we don't care when or where you work. We care how you work and how well you deliver.
- US-based team: open & transparent & professional.
- Ability to make a direct impact & build a great platform.
- Awesome culture! We are a friendly & collaborative team with a learning and growth mindset.
Your skills and experience should cover:
- 5+ years of experience developing, deploying, and debugging solutions on the AWS platform using AWS services such as S3, IAM, Lambda, API Gateway, RDS, Cognito, CloudTrail, CodePipeline, CloudFormation, CloudWatch, and WAF (Web Application Firewall).
- Amazon Web Services (AWS) Certified Developer - Associate is required; Amazon Web Services (AWS) DevOps Engineer - Professional is preferred.
- 5+ years of experience using one or more modern programming languages (Python, Node.js).
- Hands-on experience migrating data to the AWS cloud platform.
- Experience with Scrum/Agile methodology.
- Good understanding of core AWS services, their uses, and basic AWS architecture best practices (including security and scalability).
- Experience with AWS data storage tools.
- Experience configuring and implementing AWS monitoring tools such as CloudWatch and CloudTrail, and directing system logs for monitoring (see the sketch after this list).
- Experience working with Git or similar tools.
- Ability to communicate and represent AWS recommendations and standards.
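As a small sketch of the monitoring item above, publishing a custom CloudWatch metric that dashboards and alarms can consume; the namespace and metric are hypothetical:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a custom application metric for dashboards and alarms.
cloudwatch.put_metric_data(
    Namespace="App/Monitoring",  # hypothetical namespace
    MetricData=[{
        "MetricName": "FailedLogins",
        "Value": 3,
        "Unit": "Count",
    }],
)
```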
The following areas are highly advantageous:
- Experience with Docker
- Experience with PostgreSQL database