50+ Amazon S3 Jobs in India
Review Criteria:
- Strong full-stack Software Engineer profile using Node.js/Python and React
- 6+ years of experience in software development using Python or Node.js (backend) and React (frontend)
- Must have strong hands-on experience with TypeScript
- Must have experience with message-based systems such as Kafka, RabbitMQ, and Redis
- Databases: PostgreSQL, plus NoSQL databases such as MongoDB
- Product Companies Only
- Tier 1 Engineering Institutes (IIT, NIT, BITS, IIIT, DTU or equivalent)
Preferred:
- Experience in Fin-Tech, Payment, POS and Retail products is highly preferred
- Experience in mentoring and coaching the team.
Role & Responsibilities:
We are currently seeking a Senior Engineer to join our Financial Services team, contributing to the design and development of scalable systems.
The ideal candidate will be able to:
- Take ownership of delivering performant, scalable, high-quality cloud-based software on both the frontend and backend.
- Mentor team members to develop in line with product requirements.
- Collaborate with the Senior Architect on design and technology choices for the product development roadmap.
- Conduct code reviews.
Ideal Candidate:
- Thorough knowledge of developing cloud-based software, including backend APIs and React-based frontends (a minimal sketch follows this list).
- Thorough knowledge of scalable design patterns and message-based systems such as Kafka, RabbitMQ, and Redis, along with MongoDB, ORMs, and SQL.
- Experience with AWS services such as S3, IAM, and Lambda.
- Expert-level coding skills in Python (FastAPI/Django), Node.js, TypeScript, and React.
- An eye for responsive design on the frontend.
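To ground the stack above, here is a minimal, hedged sketch of a cloud-style API endpoint that publishes an event to Kafka. It assumes FastAPI, Pydantic v2, and the kafka-python package with a broker at localhost:9092; the topic and payload names are illustrative, not details of this role.

```python
# Minimal sketch: FastAPI endpoint publishing an event to Kafka (kafka-python).
# Assumptions: broker at localhost:9092; topic and payload are illustrative.
import json

from fastapi import FastAPI
from kafka import KafkaProducer
from pydantic import BaseModel

app = FastAPI()
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

class Payment(BaseModel):
    order_id: str
    amount: float

@app.post("/payments")
def create_payment(payment: Payment) -> dict:
    # Publish the event; downstream consumers process it asynchronously.
    producer.send("payments.created", payment.model_dump())
    producer.flush()
    return {"status": "queued", "order_id": payment.order_id}
```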
Perks, Benefits and Work Culture:
- We prioritize people above all else. While we're recognized for our innovative technology solutions, it's our people who drive our success. That’s why we offer a comprehensive and competitive benefits package designed to support your well-being and growth:
- Medical Insurance with coverage up to INR 8,00,000 for the employee and their family
To design, automate, and manage scalable cloud infrastructure that powers real-time AI and communication workloads globally.
Key Responsibilities
- Implement and manage CI/CD pipelines (GitHub Actions, Jenkins, or GitLab).
- Manage Kubernetes/EKS clusters
- Implement infrastructure as code (provisioning via Terraform, CloudFormation, Pulumi, etc.); a Pulumi sketch follows this list.
- Implement observability (Grafana, Loki, Prometheus, ELK/CloudWatch).
- Enforce security/compliance guardrails (GDPR, DPDP, ISO 27001, PCI, HIPAA).
- Drive cost-optimization and zero-downtime deployment strategies.
- Collaborate with developers to containerize and deploy services.
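As a small illustration of the IaC item above, the following Pulumi (Python) sketch provisions a private, versioned S3 bucket. It assumes a configured Pulumi project with AWS credentials; the bucket name is illustrative.

```python
# Sketch: provision a private, versioned S3 bucket as code with Pulumi.
# Assumptions: Pulumi CLI configured with AWS credentials; names illustrative.
import pulumi
import pulumi_aws as aws

bucket = aws.s3.Bucket(
    "artifacts-bucket",
    acl="private",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# Export the bucket name so pipelines can reference it.
pulumi.export("bucket_name", bucket.id)
```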
Required Skills & Experience
- 4–8 years in DevOps or Cloud Infrastructure roles.
- Proficiency with AWS (EKS, Lambda, API Gateway, S3, IAM).
- Experience with infrastructure-as-code and CI/CD automation.
- Familiarity with monitoring, alerting, and incident management.
What Success Looks Like
- < 10 min build-to-deploy cycle.
- 99.999% uptime with proactive incident response.
- Documented and repeatable DevOps workflows.
Job Details
- Job Title: Lead I - Data Engineering
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 6-9 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description
Job Title: Senior Data Engineer (Kafka & AWS)
Responsibilities:
- Develop and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services.
- Configure and manage Kafka connectors, ensuring seamless data flow and integration across systems (a connector-registration sketch follows this list).
- Demonstrate strong expertise in the Kafka ecosystem, including producers, consumers, brokers, topics, and schema registry.
- Design and implement scalable ETL/ELT workflows to efficiently process large volumes of data.
- Optimize data lake and data warehouse solutions using AWS services such as Lambda, S3, and Glue.
- Implement robust monitoring, testing, and observability practices to ensure reliability and performance of data platforms.
- Uphold data security, governance, and compliance standards across all data operations.
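To illustrate connector management, this hedged sketch registers a Debezium Postgres source connector through the Kafka Connect REST API. The Connect worker URL, database coordinates, and connector name are assumptions for the example.

```python
# Sketch: register a source connector via the Kafka Connect REST API.
# Assumptions: Connect worker at localhost:8083; config values are illustrative.
import requests

connector = {
    "name": "orders-source",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "db.internal",
        "database.port": "5432",
        "database.user": "replicator",
        "database.password": "change-me",
        "database.dbname": "orders",
        "topic.prefix": "orders",
    },
}

resp = requests.post("http://localhost:8083/connectors", json=connector, timeout=30)
resp.raise_for_status()
print(resp.json()["name"], "registered")
```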
Requirements:
- Minimum of 5 years of experience in Data Engineering or related roles.
- Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).
- Proficient in coding with Python, SQL, and Java, with Java strongly preferred.
- Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.
- Excellent problem-solving, communication, and collaboration skills.
- Flexibility to write production-quality code in both Python and Java as required.
Skills: AWS, Kafka, Python
Must-Haves
Notice period: 0 to 15 days only
ROLES AND RESPONSIBILITIES:
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS); a REST sketch follows this list.
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
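For flavor, here is a sketch of submitting SQL to Dremio over its REST API. The endpoint paths follow Dremio's documented login and v3 SQL/job APIs, but the host, credentials, and query are assumptions; treat it as an outline, not a definitive integration.

```python
# Sketch: authenticate to Dremio and submit a SQL job via its REST API.
# Assumptions: Dremio at dremio.internal:9047; credentials/query illustrative.
import time

import requests

BASE = "http://dremio.internal:9047"

# Log in to obtain a token (v2 login endpoint, per Dremio docs).
login = requests.post(
    f"{BASE}/apiv2/login",
    json={"userName": "analyst", "password": "change-me"},
    timeout=30,
)
token = login.json()["token"]
headers = {"Authorization": f"_dremio{token}"}

# Submit a query as a job (v3 SQL endpoint).
job = requests.post(f"{BASE}/api/v3/sql", headers=headers,
                    json={"sql": "SELECT 1"}, timeout=30)
job_id = job.json()["id"]

# Poll until the job finishes.
while True:
    status = requests.get(f"{BASE}/api/v3/job/{job_id}",
                          headers=headers, timeout=30).json()
    if status["jobState"] in ("COMPLETED", "FAILED", "CANCELED"):
        break
    time.sleep(1)
print(status["jobState"])
```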
IDEAL CANDIDATE:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
PREFERRED:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc. (an instrumentation sketch follows this list).
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
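A minimal instrumentation sketch for the observability items above, using the prometheus_client package; the metric names and simulated work are illustrative.

```python
# Sketch: expose custom service metrics for Prometheus scraping.
# Assumptions: prometheus_client installed; metric names are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    REQUESTS.inc()
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    while True:
        handle_request()
```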
4. Security, Compliance & Risk Highlighting
- Conduct frequent risk assessments and identify vulnerabilities in:
  - Cloud architecture
  - Access policies (IAM)
  - Secrets & key management
  - Data flows & network exposure
- Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
  - Microservices
  - Caching layers
  - CDN distribution (CloudFront)
  - Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing.
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why Join the Company?
- Build and operate infrastructure powering millions of monthly users.
- Opportunity to shape DevOps culture and cloud architecture from the ground up.
- High-impact role in a fast-scaling Indian OTT product.
ROLE & RESPONSIBILITIES:
We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
KEY RESPONSIBILITIES:
1. Cloud Security (AWS)-
- Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
- Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
- Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
- Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
- Ensure encryption of data at rest/in transit across all cloud services.
2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)-
Infrastructure as Code & Automation Security:
- Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
- Enforce misconfiguration scanning and automated remediation.
CI/CD Security:
- Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
- Implement secure build, artifact signing, and deployment workflows.
Containers & Kubernetes:
- Harden Docker images, private registries, runtime policies.
- Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
- Apply CIS Benchmarks for Kubernetes and Linux.
Monitoring & Reliability:
- Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
- Ensure audit logging across cloud/platform layers.
3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-
Pipeline & Workflow Security:
- Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
- Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
ML Platform Security:
- Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
- Control model access, artifact protection, model registry security, and ML metadata integrity.
Data Security:
- Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
- Enforce data versioning security, lineage tracking, PII protection, and access governance.
ML Observability:
- Implement drift detection (data drift/model drift), feature monitoring, and audit logging (a toy drift check follows this list).
- Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
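As a toy version of the drift detection mentioned above, the sketch below flags data drift with a two-sample Kolmogorov-Smirnov test from SciPy; the synthetic distributions and threshold are illustrative.

```python
# Sketch: simple data-drift check using a two-sample KS test (scipy).
# Assumptions: scipy/numpy installed; threshold and feature values illustrative.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(reference: np.ndarray, live: np.ndarray,
                    alpha: float = 0.05) -> bool:
    """Flag drift when the live distribution differs significantly."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, size=5_000)  # training-time feature values
live = rng.normal(0.4, 1.0, size=5_000)       # shifted production values

print("drift detected:", feature_drifted(reference, live))
```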
4. Network & Endpoint Security-
- Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
- Conduct vulnerability assessments, penetration test coordination, and network segmentation.
- Secure remote workforce connectivity and internal office networks.
5. Threat Detection, Incident Response & Compliance-
- Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
- Build security alerts, automated threat detection, and incident workflows.
- Lead incident containment, forensics, RCA, and remediation.
- Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
- Maintain security policies, procedures, runbooks, and audits.
IDEAL CANDIDATE:
- 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
- Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
- Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
- Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
- Strong Linux security (CIS hardening, auditing, intrusion detection).
- Proficiency in Python, Bash, and automation/scripting.
- Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
- Understanding of microservices, API security, serverless security.
- Strong understanding of vulnerability management, penetration testing practices, and remediation plans.
EDUCATION:
- Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
- Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Review Criteria
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including in recent roles
- Company type: product companies preferred; exceptions for service-company candidates with strong MLOps + AWS depth
Preferred
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria
- CV attachment is mandatory
- Please provide the CTC breakup (fixed + variable)
- Are you open to a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); a minimal DAG sketch follows this list.
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
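To make the orchestration item concrete, here is a minimal Airflow 2.x DAG wiring a training task after an ETL task; the task bodies are stubs and the schedule is illustrative.

```python
# Sketch: minimal Airflow DAG chaining a training step after an ETL step.
# Assumptions: Airflow 2.4+; task logic is stubbed and illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features() -> None:
    print("pull data, build features")  # stand-in for Spark/Glue work

def train_model() -> None:
    print("train and register model")   # stand-in for the training job

with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    etl = PythonOperator(task_id="extract_features",
                         python_callable=extract_features)
    train = PythonOperator(task_id="train_model",
                           python_callable=train_model)
    etl >> train
```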
Ideal Candidate
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
  - Compute/Orchestration: EKS, ECS, EC2, Lambda
  - Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  - Workflow: MWAA/Airflow, Step Functions
  - Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
3-5 years of experience as full stack developer with essential requirements on the following technologies: FastAPI, JavaScript, React.js-Redux, Node.js, Next.js, MongoDB, Python, Microservices, Docker, and MLOps.
Experience in Cloud Architecture using Kubernetes (K8s), Google Kubernetes Engine, Authentication and Authorisation Tools, DevOps Tools and Scalable and Secure Cloud Hosting is a significant plus.
Ability to manage a hosting environment, ability to scale applications to handle the load changes, knowledge of accessibility and security compliance.
Testing of API endpoints.
Ability to code and create functional web applications and optimise them to improve response time and efficiency. Skilled in performance tuning, query plan / explain plan analysis, indexing, and table partitioning.
Expert knowledge of Python and corresponding frameworks with their best practices, expert knowledge of relational databases, NoSQL.
Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.
Must be conversant with Agile software development methodology. Must be able to write technical documents, coordinate with test teams. Proficiency using Git version control.
MUST-HAVES:
- Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
- Notice period - 0 to 15 days only
- Hybrid work mode: 3 days in office, 2 days at home
SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS
ADDITIONAL GUIDELINES:
- Interview process: - 2 Technical round + 1 Client round
- 3 days in office, Hybrid model.
CORE RESPONSIBILITIES:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods to deep learning along with employing LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
SKILLS:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and Neo4j graph.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
REQUIRED EXPERIENCE:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements (an endpoint-invocation sketch follows this list).
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
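As a small illustration of working against SageMaker, this sketch invokes an already-deployed endpoint with boto3; the endpoint name, region, and payload are assumptions.

```python
# Sketch: invoke a deployed SageMaker endpoint with boto3.
# Assumptions: an endpoint named "churn-model" exists; payload illustrative.
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

payload = {"features": [0.3, 1.2, 5.0]}
response = runtime.invoke_endpoint(
    EndpointName="churn-model",
    ContentType="application/json",
    Body=json.dumps(payload),
)
prediction = json.loads(response["Body"].read())
print(prediction)
```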
Responsibilities:
- Design, build, and maintain backend services and APIs using Python frameworks such as FastAPI or Django.
- Implement RAG-based features and services, including document ingestion pipelines, vector indexing, and retrieval logic using modern LLM tooling (a toy retrieval sketch follows this list).
- Build robust data ingestion, scraping, and automation workflows (web scraping, headless browsers, APIs) to integrate with third-party systems and internal tools.
- Develop and operate ETL/ELT pipelines to move, clean, and transform data across databases, file stores, and external platforms.
- Own reliability, performance, and observability of services: logging, metrics, alerting, and debugging in production.
- Collaborate closely with product and business stakeholders to translate ambiguous workflows into clear technical designs and automation logic.
- Write clean, testable code with solid unit/integration coverage, and contribute to internal libraries, tooling, and best practices documentation.
- Participate in code reviews, architectural discussions, and mentor junior engineers on Python, RAG patterns, and automation best practices.
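To sketch the RAG plumbing named above: chunking, a toy vector index, and cosine retrieval. The embed() function is a hypothetical stand-in for a real embedding model, not any specific library API.

```python
# Sketch: toy chunking + cosine-similarity retrieval, the core of a RAG pipeline.
# Assumptions: embed() is a hypothetical stand-in for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in: hash bytes into a fixed-size unit vector.
    vec = np.zeros(64)
    for i, ch in enumerate(text.encode("utf-8")):
        vec[i % 64] += ch
    return vec / (np.linalg.norm(vec) or 1.0)

def chunk(doc: str, size: int = 200) -> list[str]:
    return [doc[i : i + size] for i in range(0, len(doc), size)]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    index = np.stack([embed(c) for c in chunks])  # the "vector index"
    scores = index @ embed(query)                 # cosine scores (unit vectors)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

doc = "Refunds are processed in 5 days. Invoices are emailed monthly. " * 10
print(retrieve("how long do refunds take?", chunk(doc)))
```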
Requirements:
- 4–6 years of hands-on experience as a Python engineer building production systems (FastAPI, Django, or similar).
- Strong understanding of backend fundamentals: REST APIs, authentication/authorisation, async patterns, background jobs, and task queues.
- Experience with event-driven architectures (Kafka, SQS, RabbitMQ) and workflow engines (e.g., Temporal, Airflow, Prefect).
- Practical experience with at least one LLM/RAG stack (e.g., LangChain, LlamaIndex, custom vector store integrations) and working with embeddings, chunking, and retrieval.
- Solid experience with web scraping and automation: requests/HTTP clients, Selenium/Playwright or similar, rate-limiting, anti-bot handling, and resilient scraping patterns.
- Experience building data pipelines or ETLs: extracting from APIs/files/DBs, transforming/cleaning, and loading into relational or NoSQL stores.
- Hands-on experience with AWS or similar cloud platforms (e.g., Lambda, S3, API Gateway, ECS/Fargate, or equivalent).
- Strong debugging skills and comfort with distributed, asynchronous systems and eventual consistency.
- Ability to take loosely defined business workflows and design clean, maintainable technical solutions.
- Strong communication skills and a habit of documenting decisions, APIs, and workflows.
Nice to Have:
- Experience with vector databases (e.g., Pinecone, Weaviate, Qdrant, OpenSearch vector, etc.) and search tuning.
- Exposure to building internal tools or low-code-like automation platforms for operations or support teams.
- Prior experience integrating with ERP/CRM/marketplace or other enterprise/legacy systems.
- Understanding of cloud security, IAM, and secret management best practices.
We are looking for a highly skilled Sr. Big Data Engineer with 3-5 years of experience in building large-scale data pipelines, real-time streaming solutions, and batch/stream processing systems. The ideal candidate should be proficient in Spark, Kafka, Python, and AWS Big Data services, with hands-on experience in implementing CDC (Change Data Capture) pipelines and integrating multiple data sources and sinks.
Responsibilities
- Design, develop, and optimize batch and streaming data pipelines using Apache Spark and Python (a structured-streaming sketch follows this list).
- Build and maintain real-time data ingestion pipelines leveraging Kafka and AWS Kinesis.
- Implement CDC (Change Data Capture) pipelines using Kafka Connect, Debezium or similar frameworks.
- Integrate data from multiple sources and sinks (databases, APIs, message queues, file systems, cloud storage).
- Work with AWS Big Data ecosystem: Glue, EMR, Kinesis, Athena, S3, Lambda, Step Functions.
- Ensure pipeline scalability, reliability, and performance tuning of Spark jobs and EMR clusters.
- Develop data transformation and ETL workflows in AWS Glue and manage schema evolution.
- Collaborate with data scientists, analysts, and product teams to deliver reliable and high-quality data solutions.
- Implement monitoring, logging, and alerting for critical data pipelines.
- Follow best practices for data security, compliance, and cost optimization in cloud environments.
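A hedged sketch of the streaming item above: a PySpark structured-streaming job that reads CDC events from Kafka and lands Parquet on S3. It assumes the spark-sql-kafka package is on the classpath; broker, topic, and paths are illustrative.

```python
# Sketch: PySpark structured streaming from Kafka to Parquet on S3.
# Assumptions: spark-sql-kafka package available; names/paths illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("cdc-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders.cdc")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; cast to strings for downstream parsing.
parsed = events.select(col("key").cast("string"), col("value").cast("string"))

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3a://data-lake/bronze/orders/")
    .option("checkpointLocation", "s3a://data-lake/checkpoints/orders/")
    .start()
)
query.awaitTermination()
```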
Required Skills & Experience
- Programming: Strong proficiency in Python (PySpark, data frameworks, automation).
- Big Data Processing: Hands-on experience with Apache Spark (batch & streaming).
- Messaging & Streaming: Proficient in Kafka (brokers, topics, partitions, consumer groups) and AWS Kinesis.
- CDC Pipelines: Experience with Debezium / Kafka Connect / custom CDC frameworks.
- AWS Services: AWS Glue, EMR, S3, Athena, Lambda, IAM, CloudWatch.
- ETL/ELT Workflows: Strong knowledge of data ingestion, transformation, partitioning, schema management.
- Databases: Experience with relational databases (MySQL, Postgres, Oracle) and NoSQL (MongoDB, DynamoDB, Cassandra).
- Data Formats: JSON, Parquet, Avro, ORC, Delta/Iceberg/Hudi.
- Version Control & CI/CD: Git, GitHub/GitLab, Jenkins, or CodePipeline.
- Monitoring/Logging: CloudWatch, Prometheus, ELK/Opensearch.
- Containers & Orchestration (nice-to-have): Docker, Kubernetes, Airflow/Step Functions for workflow orchestration.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
- Experience in large-scale data lake / lake house architectures.
- Knowledge of data warehousing concepts and query optimisation.
- Familiarity with data governance, lineage, and cataloging tools (Glue Data Catalog, Apache Atlas).
- Exposure to ML/AI data pipelines is a plus.
Tools & Technologies (must-have exposure)
- Big Data & Processing: Apache Spark, PySpark, AWS EMR, AWS Glue
- Streaming & Messaging: Apache Kafka, Kafka Connect, Debezium, AWS Kinesis
- Cloud & Storage: AWS (S3, Athena, Lambda, IAM, CloudWatch)
- Programming & Scripting: Python, SQL, Bash
- Orchestration: Airflow / Step Functions
- Version Control & CI/CD: Git, Jenkins/CodePipeline
- Data Formats: Parquet, Avro, ORC, JSON, Delta, Iceberg, Hudi
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.); an S3 sketch follows this list.
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL, including SQL query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
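As a small example of the AWS work described above, this boto3 sketch uploads an artifact to S3 and issues a time-limited presigned download link; bucket and key names are illustrative.

```python
# Sketch: upload a build artifact to S3 and issue a presigned download link.
# Assumptions: AWS credentials configured; bucket/key names are illustrative.
import boto3

s3 = boto3.client("s3")

s3.upload_file("dist/app.zip", "deploy-artifacts", "releases/app-1.0.0.zip")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "deploy-artifacts", "Key": "releases/app-1.0.0.zip"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)
```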
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Apache Airflow for orchestrating complex data workflows.
- EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines & Microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
Job Summary:
We are looking for a skilled and motivated Python AWS Engineer to join our team. The ideal candidate will have strong experience in backend development using Python, cloud infrastructure on AWS, and building serverless or microservices-based architectures. You will work closely with cross-functional teams to design, develop, deploy, and maintain scalable and secure applications in the cloud.
Key Responsibilities:
- Develop and maintain backend applications using Python and frameworks like Django or Flask
- Design and implement serverless solutions using AWS Lambda, API Gateway, and other AWS services (a handler sketch follows this list).
- Develop data processing pipelines using services such as AWS Glue, Step Functions, S3, DynamoDB, and RDS
- Write clean, efficient, and testable code following best practices
- Implement CI/CD pipelines using tools like CodePipeline, GitHub Actions, or Jenkins
- Monitor and optimize system performance and troubleshoot production issues
- Collaborate with DevOps and front-end teams to integrate APIs and cloud-native services
- Maintain and improve application security and compliance with industry standards
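To illustrate the serverless item above, here is a minimal Lambda handler behind an API Gateway proxy integration that writes to DynamoDB; the table name and event shape are assumptions.

```python
# Sketch: AWS Lambda handler (API Gateway proxy) writing to DynamoDB.
# Assumptions: a DynamoDB table named "orders" exists; event shape follows
# the API Gateway proxy integration.
import json

import boto3

table = boto3.resource("dynamodb").Table("orders")

def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    table.put_item(Item={"order_id": body["order_id"], "status": "received"})
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"order_id": body["order_id"]}),
    }
```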
Required Skills:
- Strong programming skills in Python
- Solid understanding of AWS cloud services (Lambda, S3, EC2, DynamoDB, RDS, IAM, API Gateway, CloudWatch, etc.)
- Experience with infrastructure as code (e.g., CloudFormation, Terraform, or AWS CDK)
- Good understanding of RESTful API design and microservices architecture
- Hands-on experience with CI/CD, Git, and version control systems
- Familiarity with containerization (Docker, ECS, or EKS) is a plus
- Strong problem-solving and communication skills
Preferred Qualifications:
- Experience with PySpark, Pandas, or data engineering tools
- Working knowledge of Django, Flask, or other Python frameworks
- AWS Certification (e.g., AWS Certified Developer – Associate) is a plus
Educational Qualification:
- Bachelor's or Master’s degree in Computer Science, Engineering, or related field
Mumbai (Malad), work from office
6 days working
1st & 3rd Saturdays off
AWS Expertise: Minimum 2 years of experience working with AWS services like RDS, S3, EC2, and Lambda.
Roles and Responsibilities
1. Backend Development: Develop scalable and high-performance APIs and backend systems using Node.js. Write clean, modular, and reusable code following best practices. Debug, test, and optimize backend services for performance and scalability.
2. Database Management: Design and maintain relational databases using MySQL, PostgreSQL, or AWS RDS. Optimize database queries and ensure data integrity. Implement data backup and recovery plans.
3. AWS Cloud Services: Deploy, manage, and monitor applications using AWS infrastructure. Work with AWS services including RDS, S3, EC2, Lambda, API Gateway, and CloudWatch. Implement security best practices for AWS environments (IAM policies, encryption, etc.).
4. Integration and Microservices: Integrate third-party APIs and services. Develop and manage microservices architecture for modular application development.
5. Version Control and Collaboration: Use Git for code versioning and maintain repositories. Collaborate with front-end developers and project managers for end-to-end project delivery.
6. Troubleshooting and Debugging: Analyze and resolve technical issues and bugs. Provide maintenance and support for existing backend systems.
7. DevOps and CI/CD: Set up and maintain CI/CD pipelines. Automate deployment processes and ensure zero-downtime releases.
8. Agile Development: Participate in Agile/Scrum ceremonies such as daily stand-ups, sprint planning, and retrospectives. Deliver tasks within defined timelines while maintaining high quality.
Required Skills
Strong proficiency in Node.js and JavaScript/TypeScript.
Expertise in working with relational databases like MySQL/PostgreSQL and AWS RDS.
Proficient with AWS services including Lambda, S3, EC2, and API Gateway.
Experience with RESTful API design and GraphQL (optional).
Knowledge of containerization using Docker is a plus.
Strong problem-solving and debugging skills.
Familiarity with tools like Git, Jenkins, and Jira.
Job Title : Python Developer – API Integration & AWS Deployment
Experience : 5+ Years
Location : Bangalore
Work Mode : Onsite
Job Overview :
We are seeking an experienced Python Developer with strong expertise in API development and AWS cloud deployment.
The ideal candidate will be responsible for building scalable RESTful APIs, automating power system simulations using PSS®E (psspy), and deploying automation workflows securely and efficiently on AWS.
Mandatory Skills : Python, FastAPI/Flask, PSS®E (psspy), RESTful API Development, AWS (EC2, Lambda, S3, EFS, API Gateway), AWS IAM, CloudWatch.
Key Responsibilities :
Python Development & API Integration :
- Design, build, and maintain RESTful APIs using FastAPI or Flask to interface with PSS®E (a stubbed sketch follows the deployment list below).
- Automate simulations and workflows using the PSS®E Python API (psspy).
- Implement robust bulk case processing, result extraction, and automated reporting systems.
AWS Cloud Deployment :
- Deploy APIs and automation pipelines using AWS services such as EC2, Lambda, S3, EFS, and API Gateway.
- Apply cloud-native best practices to ensure reliability, scalability, and cost efficiency.
- Manage secure access control using AWS IAM, API keys, and implement monitoring using CloudWatch.
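A hedged sketch of the API layer described above: a FastAPI endpoint that triggers a simulation and stores results in S3. The psspy interaction is deliberately stubbed, since no real psspy calls are shown here; the bucket name and result shape are assumptions.

```python
# Sketch: FastAPI endpoint running a stubbed simulation and storing results in S3.
# The run_simulation body is a placeholder; real psspy calls are not shown.
import json

import boto3
from fastapi import FastAPI

app = FastAPI()
s3 = boto3.client("s3")

def run_simulation(case_name: str) -> dict:
    # Placeholder for psspy-driven work (load case, solve, extract results).
    return {"case": case_name, "converged": True}

@app.post("/simulations/{case_name}")
def simulate(case_name: str) -> dict:
    result = run_simulation(case_name)
    s3.put_object(
        Bucket="pss-results",  # illustrative bucket name
        Key=f"runs/{case_name}.json",
        Body=json.dumps(result).encode("utf-8"),
    )
    return result
```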
Required Skills :
- 5+ Years of professional experience in Python development.
- Hands-on experience with RESTful API development (FastAPI/Flask).
- Solid experience working with PSS®E and its psspy Python API.
- Strong understanding of AWS services, deployment, and best practices.
- Proficiency in automation, scripting, and report generation.
- Knowledge of cloud security and monitoring tools like IAM and CloudWatch.
Good to Have :
- Experience in power system simulation and electrical engineering concepts.
- Familiarity with CI/CD tools for AWS deployments.
Job Title : Full Stack Drupal Developer
Experience : Minimum 5 Years
Location : Hyderabad / Bangalore / Mumbai / Pune / Chennai / Gurgaon (Hybrid or On-site)
Notice Period : Immediate to 15 Days Preferred
Job Summary :
We are seeking a skilled and experienced Full Stack Drupal Developer with a strong background in Drupal (version 8 and above) for both front-end and back-end development. The ideal candidate will have hands-on experience in AWS deployments, Drupal theming and module development, and a solid understanding of JavaScript, PHP, and core Drupal architecture. Acquia certifications and contributions to the Drupal community are highly desirable.
Mandatory Skills :
Drupal 8+, PHP, JavaScript, Custom Module & Theming Development, AWS (EC2, Lightsail, S3, CloudFront), Acquia Certified, Drupal Community Contributions.
Key Responsibilities :
- Develop and maintain full-stack Drupal applications, including both front-end (theming) and back-end (custom module) development.
- Deploy and manage Drupal applications on AWS using services like EC2, Lightsail, S3, and CloudFront.
- Work with the Drupal theming layer and module layer to build custom and reusable components.
- Write efficient and scalable PHP code integrated with JavaScript and core JS concepts.
- Collaborate with UI/UX teams to ensure high-quality user experiences.
- Optimize performance and ensure high availability of applications in cloud environments.
- Contribute to the Drupal community and utilize contributed modules effectively.
- Follow best practices for code versioning, documentation, and CI/CD deployment processes.
Required Skills & Qualifications :
- Minimum 5 Years of hands-on experience in Drupal development (Drupal 8 onwards).
- Strong experience in front-end (theming, JavaScript, HTML, CSS) and back-end (custom module development, PHP).
- Experience with Drupal deployment on AWS, including services such as EC2, Lightsail, S3, and CloudFront.
- Proficiency in JavaScript, core JS concepts, and PHP coding.
- Acquia certifications such as:
  - Drupal Developer Certification
  - Site Management Certification
  - Acquia Certified Developer (preferred)
- Experience with contributed modules and active participation in the Drupal community is a plus.
- Familiarity with version control (Git), Agile methodologies, and modern DevOps tools.
Preferred Certifications :
- Acquia Certified Developer.
- Acquia Site Management Certification.
- Any relevant AWS certifications are a bonus.
Job Title: Data Analytics Engineer
Experience: 3 to 6 years
Location: Gurgaon (Hybrid)
Employment Type: Full-time
Job Description:
We are seeking a highly skilled Data Analytics Engineer with expertise in Qlik Replicate, Qlik Compose, and Data Warehousing to build and maintain robust data pipelines. The ideal candidate will have hands-on experience with Change Data Capture (CDC) pipelines from various sources, an understanding of Bronze, Silver, and Gold data layers, SQL querying for data warehouses like Amazon Redshift, and experience with Data Lakes using S3. A foundational understanding of Apache Parquet and Python is also desirable.
Key Responsibilities:
1. Data Pipeline Development & Maintenance
- Design, develop, and maintain ETL/ELT pipelines using Qlik Replicate and Qlik Compose.
- Ensure seamless data replication and transformation across multiple systems.
- Implement and optimize CDC-based data pipelines from various source systems.
2. Data Layering & Warehouse Management
- Implement Bronze, Silver, and Gold layer architectures to optimize data workflows.
- Design and manage data pipelines for structured and unstructured data.
- Ensure data integrity and quality within Redshift and other analytical data stores.
3. Database Management & SQL Development
- Write, optimize, and troubleshoot complex SQL queries for data warehouses like Redshift.
- Design and implement data models that support business intelligence and analytics use cases.
4. Data Lakes & Storage Optimization
- Work with AWS S3-based Data Lakes to store and manage large-scale datasets.
- Optimize data ingestion and retrieval using Apache Parquet, as sketched below.
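A minimal Parquet sketch for the item above, using pandas with the pyarrow engine; writing straight to an s3:// path assumes the s3fs package, and the bucket and columns are illustrative.

```python
# Sketch: write a dataset to S3 as Parquet with pandas + pyarrow.
# Assumptions: pandas, pyarrow, and s3fs installed; bucket/paths illustrative.
import pandas as pd

df = pd.DataFrame({"order_id": [1, 2, 3], "amount": [120.5, 80.0, 42.75]})

# Columnar Parquet keeps scans cheap for engines like Redshift Spectrum/Athena.
df.to_parquet(
    "s3://analytics-lake/silver/orders/orders.parquet",
    engine="pyarrow",
    index=False,
)

# Reading it back is symmetric.
restored = pd.read_parquet("s3://analytics-lake/silver/orders/orders.parquet")
print(restored.head())
```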
5. Data Integration & Automation
- Integrate diverse data sources into a centralized analytics platform.
- Automate workflows to improve efficiency and reduce manual effort.
- Leverage Python for scripting, automation, and data manipulation where necessary.
6. Performance Optimization & Monitoring
- Monitor data pipelines for failures and implement recovery strategies.
- Optimize data flows for better performance, scalability, and cost-effectiveness.
- Troubleshoot and resolve ETL and data replication issues proactively.
Technical Expertise Required:
- 3 to 6 years of experience in Data Engineering, ETL Development, or related roles.
- Hands-on experience with Qlik Replicate & Qlik Compose for data integration.
- Strong SQL expertise, with experience in writing and optimizing queries for Redshift.
- Experience working with Bronze, Silver, and Gold layer architectures.
- Knowledge of Change Data Capture (CDC) pipelines from multiple sources.
- Experience working with AWS S3 Data Lakes.
- Experience working with Apache Parquet for data storage optimization.
- Basic understanding of Python for automation and data processing.
- Experience in cloud-based data architectures (AWS, Azure, GCP) is a plus.
- Strong analytical and problem-solving skills.
- Ability to work in a fast-paced, agile environment.
Preferred Qualifications:
- Experience in performance tuning and cost optimization in Redshift.
- Familiarity with big data technologies such as Spark or Hadoop.
- Understanding of data governance and security best practices.
- Exposure to data visualization tools such as Qlik Sense, Tableau, or Power BI.
We are seeking a skilled and motivated Software Engineer with over 3 years of experience in designing and developing web-based applications using Node.js.
Key Responsibilities
- Design, develop, and maintain web-based applications using Node.js.
- Build scalable, high-performance RESTful APIs using Express.js or Restify frameworks.
- Develop and maintain robust SQL database systems, leveraging Sequelize ORM.
- Ensure responsiveness of applications across various devices and platforms.
- Collaborate with cross-functional teams during the product development lifecycle, including prototyping, hardening, and testing phases.
- Work with real-time communication technologies and ensure seamless integration.
- Learn and adapt to alternative technologies as needed to meet project requirements.
Required Skills & Experience
- 3+ years of experience in web application development using Node.js.
- Proficiency with frameworks such as Express.js or Restify.
- Strong expertise in SQL databases and experience with Sequelize ORM.
- In-depth understanding of JavaScript, browser technologies, and real-time communication.
- Hands-on experience in developing responsive web applications.
- Experience with React Native (a plus).
- Proficiency in Java.
- Familiarity with product development lifecycle, including prototyping, testing, and deployment.
Additional Skills & Experience
- Experience with NoSQL databases such as MongoDB or Cassandra.
- Knowledge of internationalization (i18n) and latest UI/UX design trends.
- Familiarity with JavaScript libraries/frameworks like ReactJS or VueJS.
- Experience integrating payment gateways for various countries.
- Strong communication skills and ability to facilitate group discussions effectively.
- Eagerness to contribute to product functionality and user experience designs.
Education Requirements
- Bachelor's or Master's degree in Computer Science or a related field.
Job Title : Sr. Data Engineer
Experience : 5+ Years
Location : Noida (Hybrid – 3 Days in Office)
Shift Timing : 2-11 PM
Availability : Immediate
Job Description :
- We are seeking a Senior Data Engineer to design, develop, and optimize data solutions.
- The role involves building ETL pipelines, integrating data into BI tools, and ensuring data quality while working with SQL, Python (Pandas, NumPy), and cloud platforms (AWS/GCP).
- You will also develop dashboards using Looker Studio and work with AWS services like S3, Lambda, Glue ETL, Athena, RDS, and Redshift (an Athena sketch follows this description).
- Strong debugging, collaboration, and communication skills are essential.
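As an illustration of the Athena work mentioned above, this boto3 sketch starts a query, polls for completion, and reads the results; the database, table, and output location are assumptions.

```python
# Sketch: run an Athena query with boto3 and poll for completion.
# Assumptions: database/table and S3 output location are illustrative.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

start = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM orders GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
query_id = start["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows)
```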
Job Title : Senior AWS Data Engineer
Experience : 5+ Years
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Senior AWS Data Engineer with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.
- Good understanding and experience of HTML / CSS / JavaScript.
- Hands-on experience with ES6 / ES7 / ES8 features.
- Thorough understanding of the Request Lifecycle (including Event Queue, Event Loop, Worker Threads, etc).
- Familiarity with security principles including SSL protocols, data encryption, XSS, CSRF.
- Expertise in Web Services / REST APIs will be beneficial.
- Proficiency in Linux and deployment on Linux are valuable.
- Knowledge of ORMs like Sequelize and ODMs like Mongoose, and the ability to handle DB transactions, is a necessity.
- Experience with Angular JS / React JS will be an added advantage.
- Expertise with RDBMS like MySQL / PostgreSQL will be a plus.
- Knowledge of AWS services like S3, EC2 will be helpful.
- Understanding of Agile and CI/CD will be of value.
Job Title : Tech Lead - Data Engineering (AWS, 7+ Years)
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Tech Lead - Data Engineering with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.
BACKEND DEVELOPER JOB DESCRIPTION
Job Title: Backend Developer - Node.js & MongoDB
Location: Hyderabad
Employment Type: Full-Time
Experience Required: 3–5 Years
About Us
Inncircles – THE INNGINEERING COMPANY
We are a forward-thinking construction-tech innovator building CRM solutions that manage crores of records with precision and speed. Our mission is to revolutionize the construction domain through scalable engineering and robust backend systems. Join us to solve complex challenges and shape the future of data-driven construction tech!
Job Description
We are hiring a Backend Developer with 3–5 years of hands-on experience in Node.js and MongoDB to design, optimize, and maintain high-performance backend systems. You will work on large-scale data processing, external integrations, and scalable architectures while ensuring best coding practices and efficient database design.
Key Responsibilities
Backend Development & Optimization
- Develop and maintain RESTful/GraphQL APIs using Node.js, adhering to best coding practices and reusable code structures.
- Write optimized MongoDB queries for collections with crores of records, ensuring efficient data retrieval and storage.
- Design MongoDB collections, implement indexing strategies, and optimize replica sets for performance and reliability.
Scalability & Performance
- Implement horizontal and vertical scaling strategies to handle growing data and traffic.
- Optimize database performance through indexing, aggregation pipelines, and query tuning.
External Integrations & Debugging
- Integrate third-party APIs (payment gateways, analytics tools, etc.) and SDKs seamlessly into backend systems.
- Debug and resolve complex issues in production environments with a systematic, data-driven approach.
AWS & Cloud Services
Work with AWS services like Lambda (serverless), SQS (message queuing), S3 (storage), and EC2 (compute) to build resilient and scalable solutions.
Collaboration & Best Practices
Collaborate with frontend teams to ensure smooth API integrations and data flow.
Document code, write unit/integration tests, and enforce coding standards.
Mandatory Requirements
3–5 years of professional experience in Node.js and MongoDB.
Expertise in:
- MongoDB: Collection design, indexing, aggregation pipelines, replica sets, and sharding.
- Node.js: Asynchronous programming, middleware, and API development (Express.js/Fastify).
- Query Optimization: Writing efficient queries for large datasets (crores of records).
- Strong debugging skills and experience in resolving production issues.
- Hands-on experience with external integrations (APIs, SDKs, webhooks).
- Knowledge of horizontal/vertical scaling techniques and performance tuning.
- Familiarity with AWS services (Lambda, SQS, S3, EC2).
Preferred Skills
- Experience with microservices architecture.
- Knowledge of CI/CD pipelines (GitLab CI, Jenkins).
- Understanding of Docker, Kubernetes, or serverless frameworks.
- Exposure to monitoring tools like Prometheus, Grafana, or New Relic.
Why Join Inncircles?
Solve large-scale data challenges in the construction domain.
Work on cutting-edge cloud-native backend systems.
Competitive salary, flexible work culture, and growth opportunities.
Apply Now:
If you’re passionate about building scalable backend systems and thrive in a data-heavy environment, share your resume and a GitHub/portfolio link showcasing projects with Node.js, MongoDB, and AWS integrations.
Inncircles – THE INNGINEERING COMPANY
📍 Hyderabad | 🚀 Building Tomorrow’s Tech Today
Job Title : MERN Stack Developer
Experience : 5+ Years
Shift Timings : 8:00 AM to 5:00 PM
Role Overview:
We are hiring a skilled MERN Stack Developer to build scalable web applications. You’ll work on both front-end and back-end, leveraging modern frameworks and cloud technologies to deliver high-quality solutions.
Key Responsibilities :
- Develop responsive UIs using React, GraphQL, and TypeScript.
- Build back-end APIs with Node.js, Express, and MySQL.
- Integrate AWS services like Lambda, S3, and API Gateway.
- Optimize deployments using AWS CDK and CloudFormation.
- Ensure code quality with Mocha/Chai/Sinon, ESLint, and Prettier.
Required Skills :
- Strong experience with React, Node.js, and GraphQL.
- Proficiency in AWS services and Infrastructure as Code (CDK/Terraform).
- Familiarity with MySQL, Elasticsearch, and modern testing frameworks.
Job description
Key Responsibilities
- Design, develop, and maintain serverless applications using AWS services such as Lambda, API Gateway, DynamoDB, and S3.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Build and maintain RESTful APIs to support web and mobile applications.
- Implement security best practices for AWS services and manage IAM roles and policies.
- Optimize application performance, scalability, and reliability through monitoring and testing.
- Write clean, maintainable, and efficient code following best practices and design patterns.
- Participate in code reviews, providing constructive feedback to peers.
- Troubleshoot and debug applications, identifying performance bottlenecks and areas for improvement.
- Stay updated with emerging technologies and industry trends related to serverless architectures and Python development.
Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field, or equivalent experience.
- Proven experience as a Python backend developer, with a strong portfolio of serverless applications.
- Proficiency in AWS services, particularly in serverless architectures (Lambda, API Gateway, DynamoDB, etc.).
- Solid understanding of RESTful API design principles and best practices.
- Familiarity with CI/CD practices and tools (e.g., AWS CodePipeline, Jenkins).
- Experience with containerization technologies (Docker, Kubernetes) is a plus.
- Strong problem-solving skills and the ability to work independently and collaboratively.
- Excellent communication skills, both verbal and written.
Preferred Skills
- Experience with frontend technologies (JavaScript, React, Angular) is a plus.
- Knowledge of data storage solutions (SQL and NoSQL databases).
- AWS certifications (e.g., AWS Certified Developer Associate) are a plus.
Qualifications:
1. 10+ years of experience, with 3+ years as Database Architect or related role
2. Technical expertise in data schemas, Amazon Redshift, Amazon S3, and Data Lakes
3. Analytical skills in data warehouse design and business intelligence
4. Strong problem-solving and strategic thinking abilities
5. Excellent communication skills
6. Bachelor's degree in Computer Science or related field; Master's degree preferred
Skills Required:
1. Database architecture and design
2. Data warehousing and business intelligence
3. Cloud-based data infrastructure (Amazon Redshift, S3, Data Lakes)
4. Data governance and security
5. Analytical and problem-solving skills
6. Strategic thinking and communication
7. Collaboration and team management
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements (a sketch follows this list).
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
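As a rough sketch of the PySpark-based ETL work described above (bucket names and schema are hypothetical, not part of the role's actual environment):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV files landed in S3 (hypothetical path).
raw = spark.read.option("header", True).csv("s3://my-raw-bucket/orders/")

# Transform: basic cleaning and a derived date column.
clean = (raw.dropna(subset=["order_id"])
            .withColumn("order_date", F.to_date("order_date")))

# Load: write partitioned Parquet for the warehouse to ingest.
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://my-curated-bucket/orders/"))
```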
We're seeking an experienced Senior Tech Lead to oversee both frontend and backend teams for a production-ready enterprise project. You should possess strong managerial skills along with an entrepreneurial mindset. In this dynamic role, you'll collaborate with cross-functional teams to design, build, and deploy products aligned with our vision and strategy. Your leadership will be key in driving product success from conception to launch, ensuring they meet business objectives and user expectations.
Experience: 7+ Years
Working Time: 12.30 PM to 9.30 PM
Responsibilities:
- Lead and mentor developers: Provide guidance and support to ensure high-quality deliverables, foster collaboration, and mentor junior engineers in engineering best practices.
- Collaborate with cross-functional teams: Define project requirements, timelines, and priorities, and coordinate product releases.
- Architect scalable solutions: Design systems for both frontend and backend that are maintainable using modern architecture patterns and RESTful API design principles.
- Project Management Skills: Proficient in Agile methodologies, managing timelines, and prioritizing tasks effectively to ensure project success.
- Define product features: Set sprint goals and translate user feedback into actionable enhancements.
- Problem-Solving Skills: Analytical problem-solver adept at addressing technical challenges and implementing practical solutions.
- Analyze data: Validate product goals and adapt strategies accordingly, and track project progress to ensure timely delivery of features.
- Test and accept product features: Ensure accurate implementation of product features based on user stories.
Requirements
- Bachelor's / Master’s degree in Computer Science or related field.
- Minimum of 3 years of experience in a leadership role.
- Nice to have: Experience in building a product from concept to launch.
- Excellent communication and interpersonal skills, with the ability to collaborate effectively across teams.
- Strong proficiency in NodeJS, RESTful APIs, Weaviate Vector Database, and graph databases.
- Proficient in NestJS with full lifecycle experience and expertise in MongoDB integration.
- Proficient in MongoDB, with expertise in NoSQL principles, instance management, data modeling, and efficient query optimization for cloud and on-premise environments.
- Strong proficiency in ReactJS, NextJS, MaterialUI, and React Query.
- Proficient in TypeScript development, skilled in building type-safe applications and leveraging TypeScript configurations for enhanced development efficiency.
- Proficient in AWS services, specializing in Lambda for serverless computing, API Gateway for secure API management, and integration with IAM, S3, and CloudWatch.
- Knowledge of CI/CD principles and tools to automate the testing and deployment of applications.
Benefits
- Gain real-world experience in corporate functioning.
- Learn to collaborate with diverse teams and meet deadlines in a professional environment.
- Access various learning and development programs to explore your passion.
- Work in a fast-paced, rapidly expanding tech team undergoing a revamp, with exposure to advanced technology and tools relevant to your role
We are Seeking:
1. AWS Serverless, AWS CDK:
Proficiency in developing serverless applications using AWS Lambda, API Gateway, S3, and other relevant AWS services.
Experience with AWS CDK for defining and deploying cloud infrastructure (a minimal Python sketch follows this section).
Knowledge of serverless design patterns and best practices.
Understanding of Infrastructure as Code (IaC) concepts.
Experience in CI/CD workflows with AWS CodePipeline and CodeBuild.
2. TypeScript, React/Angular:
Proficiency in TypeScript.
Experience in developing single-page applications (SPAs) using React.js or Angular.
Knowledge of state management libraries like Redux (for React) or RxJS (for Angular).
Understanding of component-based architecture and modern frontend development practices.
3. Node.js:
Strong proficiency in backend development using Node.js.
Understanding of asynchronous programming and event-driven architecture.
Familiarity with RESTful API development and integration.
4. MongoDB/NoSQL:
Experience with NoSQL databases and their use cases.
Familiarity with data modeling and indexing strategies in NoSQL databases.
Ability to integrate NoSQL databases into serverless architectures.
5. CI/CD:
Ability to troubleshoot and debug CI/CD pipelines.
Knowledge of automated testing practices and tools.
Understanding of deployment automation and release management processes.
Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field.
Certification (Preferred / Added Advantage): AWS certifications (e.g., AWS Certified Developer - Associate)
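A minimal AWS CDK (v2, Python) sketch of the serverless pattern described in point 1, assuming a handler packaged from a local `lambda/` directory; this is illustrative, not the team's actual stack:

```python
from aws_cdk import App, Stack
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_apigateway as apigw
from constructs import Construct

class ApiStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Lambda function packaged from a local "lambda/" folder (hypothetical).
        handler = _lambda.Function(
            self, "Handler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="app.handler",
            code=_lambda.Code.from_asset("lambda"),
        )
        # REST API proxying all routes to the function.
        apigw.LambdaRestApi(self, "Api", handler=handler)

app = App()
ApiStack(app, "DemoApiStack")
app.synth()
```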
Simply Fleet is a fast-growing SaaS solution to help automate an organization's fleet maintenance operations. You can learn more about our product by going to www.simply-fleet.com
We are looking for an enthusiastic and proactive Android developer to manage the app development of Simply Fleet on Android. The developer will work closely with the other members of the product team to build and maintain Simply Fleet's Android app.
What you will do:
- You should be proficient in Android development with Kotlin
- You should have a fair idea about web services
- You should be comfortable working with JSON
- You should have a strong knowledge of Android UI design principles, patterns, and best practices
- You should have a working knowledge of location services
- Knowledge of AWS S3 is a plus
- You should have an understanding of code versioning tools, such as Git
- You should have deployed apps on Google Play Console
- We expect you to be proficient in best coding practices like adding comments, using proper naming conventions, performing unit testing of your code, etc.
- You should be well versed in developing in Android Studio
Who you are:
- You should have 2+ years of experience in Android development
- You should be committed since we follow a hybrid model
- You are expected to be present in our physical office in Pune, MH twice a week
- You should be willing to take complete ownership of your work
- Above all, you should be able to think independently and creatively
You can expect a smooth onboarding process with structured timelines. You can expect teams that listen and learn. You can expect to be counted on, and you'll be given the freedom to do your best work. We build our product, our teams, and our company for the long haul, so you can build your career here if you choose to. This is your platform to be a part of a growing startup and to work with some really awesome folks. We will make sure you have fun along the way.
We require a full-stack Senior SDE with a focus on backend microservices / modular monoliths, with 3-4+ years of experience in the following:
- Bachelor’s or Master’s degree in Computer Science or equivalent industry technical skills
- Mandatory In-depth knowledge and strong experience in Python programming language.
- Expertise and significant work experience in Python with Fastapi and Async frameworks.
- Prior experience building Microservice and/or modular monolith.
- Should be an expert in Object-Oriented Programming and Design Patterns.
- Has knowledge and experience with SQLAlchemy/ORM, Celery, Flower, etc.
- Has knowledge and experience with Kafka / RabbitMQ, Redis.
- Experience in Postgres/ Cockroachdb.
- Experience in MongoDB/DynamoDB and/or Cassandra are added advantages.
- Strong experience in AWS services (e.g., EC2, ECS, Lambda, Step Functions, S3, SQS, Cognito) and/or equivalent Azure services preferred.
- Experience working with Docker required.
- Experience in socket.io added advantage
- Experience with CI/CD (e.g., GitHub Actions) preferred.
- Experience in version control tools Git etc.
This is one of the early positions for scaling up the Technology team. So culture-fit is really important.
- The role will require serious commitment, and someone with a similar mindset with the team would be a good fit. It's going to be a tremendous growth opportunity. There will be challenging tasks. A lot of these tasks would involve working closely with our AI & Data Science Team.
- We are looking for someone who has considerable expertise and experience on a low latency highly scaled backend / fullstack engineering stack. The role is ideal for someone who's willing to take such challenges.
- Coding Expectation – 70-80% of time.
- Has worked with enterprise solution company / client or, worked with growth/scaled startup earlier.
- Skills to work effectively in a distributed and remote team environment.
- AWS Cloud Solutions: We expect you to have a strong understanding of AWS services and how to architect solutions using them.
- Use of DynamoDB, S3, Vault, Lambda, or other AWS infrastructure components.
- Design scalable, highly available, and fault-tolerant cloud architectures.
- Continuous Integration and Deployment (CI/CD): Understanding CI/CD pipelines and tools like AWS CodePipeline, CodeCommit, and CodeDeploy is essential.
- Advanced concepts in React Native.
Python Developer
6-8 Years
Mumbai
Notice period: immediate joiners only, or candidates serving notice whose last working day is in the first week of July.
- Python knowledge: object-oriented programming (inheritance, abstract classes, dataclasses, dependency injection), design patterns (command-query, repository, adapter, hexagonal architecture), Swagger/OpenAPI, Flask, Connexion (a small repository-pattern sketch follows this list).
- Experience with AWS services: Lambda, ECS, SQS, S3, DynamoDB, Aurora DB.
- Experience with the following libraries/tools: boto3, behave, pytest, moto, localstack, docker.
- Basic knowledge of Terraform and GitLab CI.
- Experience with SQL DB
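A small sketch of the repository/adapter style named above, using a hypothetical `User` model; the point of the pattern (hexagonal architecture) is that domain code depends only on the interface, not on any particular storage backend:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    email: str

class UserRepository(ABC):
    """Port: the domain depends only on this interface."""
    @abstractmethod
    def add(self, user: User) -> None: ...
    @abstractmethod
    def get(self, user_id: str) -> User | None: ...

class InMemoryUserRepository(UserRepository):
    """Adapter: swap for a DynamoDB- or SQL-backed one in production."""
    def __init__(self) -> None:
        self._users: dict[str, User] = {}
    def add(self, user: User) -> None:
        self._users[user.user_id] = user
    def get(self, user_id: str) -> User | None:
        return self._users.get(user_id)

repo: UserRepository = InMemoryUserRepository()
repo.add(User("u1", "a@example.com"))
assert repo.get("u1").email == "a@example.com"
```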
Requirements
Experience
- 5+ years of professional experience in implementing MLOps framework to scale up ML in production.
- Hands-on experience with Kubernetes, Kubeflow, MLflow, SageMaker, and other ML model experiment-management tools, covering training, inference, and evaluation (an MLflow logging sketch follows this list).
- Experience in ML model serving (TorchServe, TensorFlow Serving, NVIDIA Triton inference server, etc.)
- Proficiency with ML model training frameworks (PyTorch, Pytorch Lightning, Tensorflow, etc.).
- Experience with GPU computing to do data and model training parallelism.
- Solid software engineering skills in developing systems for production.
- Strong expertise in Python.
- Building end-to-end data systems as an ML Engineer, Platform Engineer, or equivalent.
- Experience working with cloud data processing technologies (S3, ECR, Lambda, AWS, Spark, Dask, ElasticSearch, Presto, SQL, etc.).
- Having Geospatial / Remote sensing experience is a plus.
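As a minimal illustration of the experiment tracking mentioned above, an MLflow run logging one parameter and one metric (the values are placeholders; by default runs land in a local ./mlruns store):

```python
import mlflow

# Track one training run; swap in mlflow.set_tracking_uri(...) for a remote server.
with mlflow.start_run(run_name="demo"):
    mlflow.log_param("learning_rate", 1e-3)  # hyperparameter
    mlflow.log_metric("val_loss", 0.42)      # placeholder metric
```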
Role: Principal DevOps Engineer
About the Client
It is a product-based company building a platform using AI and ML technology for transportation and logistics. They also have a presence in the global market.
Responsibilities and Requirements
• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
• Knowledge in Linux/Unix Administration and Python/Shell Scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
• Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
• Experience in enterprise application development, maintenance and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication skills, judgment and decision-making skills
Job Description:
We are looking for a talented Full Stack Developer with a strong background in Node.js, React.js, and AWS to contribute to the development and maintenance of our web applications. As a Full Stack Developer, you will work closely with cross-functional teams to design, develop, and deploy scalable and high-performance software solutions.
Responsibilities:
Collaborate with product managers and designers to translate requirements into technical specifications and deliver high-quality software solutions.
Develop and maintain web applications using Node.js and React.js frameworks.
Write clean, efficient, and well-documented code to ensure the reliability and maintainability of the software.
Implement responsive user interfaces, ensuring a seamless user experience across different devices and platforms.
Integrate third-party APIs and services to enhance application functionality.
Design and optimize databases to ensure efficient data storage and retrieval.
Deploy and manage applications on AWS cloud infrastructure, utilizing services such as EC2, S3, Lambda, and API Gateway.
Monitor and troubleshoot application performance, identify and resolve issues proactively.
Conduct code reviews to maintain code quality standards and provide constructive feedback to team members.
Stay up to date with the latest trends and best practices in web development and cloud technologies.
Requirements:
Proven experience as a Full Stack Developer, working with Node.js and React.js in a professional setting.
Strong proficiency in JavaScript and familiarity with modern front-end frameworks and libraries.
Experience with AWS services, such as EC2, S3, Lambda, API Gateway, and CloudFormation.
Knowledge of database systems, both SQL and NoSQL, and the ability to design efficient data models.
Familiarity with version control systems (e.g., Git) and agile development methodologies.
Ability to write clean, efficient, and well-documented code, following best practices and coding standards.
Strong problem-solving skills and the ability to work effectively in a fast-paced environment.
Excellent communication and collaboration skills, with the ability to work well in a team.
Roles & Responsibilities:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Knowledge in Linux/Unix Administration and Python/Shell Scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
Description
Do you dream about code every night? If so, we’d love to talk to you about a new product that we’re making to enable delightful testing experiences at scale for development teams who build modern software solutions.
What You'll Do
Troubleshooting and analyzing technical issues raised by internal and external users.
Working with Monitoring tools like Prometheus / Nagios / Zabbix.
Experience developing automation in one or more technologies such as Terraform, Ansible, CloudFormation, Puppet, or Chef is preferred.
Monitor infrastructure alerts and take proactive action to avoid downtime and customer impacts.
Working closely with the cross-functional teams to resolve issues.
Test, build, design, deploy, and maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
Work in close coordination with the development and operations team such that the application is in line with performance according to the customer's expectations.
What you should have
Bachelor’s or Master’s degree in computer science or any related field.
3 - 6 years of experience in Linux / Unix, cloud computing techniques.
Familiar with working on cloud and datacenter for enterprise customers.
Hands-on experience with Linux / Windows / macOS and Batch/AppleScript/Bash scripting.
Experience with various databases such as MongoDB, PostgreSQL, MySQL, MSSQL.
Familiar with AWS technologies like EC2, S3, Lambda, IAM, etc.
Must know how to choose the best tools and technologies which best fit the business needs.
Experience in developing and maintaining CI/CD processes using tools like Git, GitHub, Jenkins etc.
Excellent organizational skills to adapt to a constantly changing technical environment
DATA ENGINEER
Overview
They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by the regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people-proving everything we do. From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the product they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs. Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.
Job Description:
We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.
Responsibilities:
Design and implement scalable and robust data pipelines to collect, process, and store data from various sources.
Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.
Optimize and tune the performance of data systems to ensure efficient data processing and analysis.
Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.
Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness.
Implement and maintain data governance and security measures to protect sensitive data.
Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.
Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.
Qualifications:
Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.
Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.
Strong programming skills in languages such as Python, Java, or Scala.
Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.
Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).
Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).
Solid understanding of data modeling, data warehousing, and ETL principles.
Knowledge of data integration techniques and tools (e.g., Apache Nifi, Talend, or Informatica).
Strong problem-solving and analytical skills, with the ability to handle complex data challenges.
Excellent communication and collaboration skills to work effectively in a team environment.
Preferred Qualifications:
Advanced knowledge of distributed computing and parallel processing.
Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink); a small Kafka producer sketch follows this list.
Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).
Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
Experience with data visualization and reporting tools (e.g., Tableau, Power BI).
Certification in relevant technologies or data engineering disciplines.
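As a small illustration of the streaming stack mentioned above, a kafka-python producer sending JSON events; the broker address and topic name are placeholders, not part of the posting:

```python
import json
from kafka import KafkaProducer

# Serialize dicts to JSON bytes on the way out.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("events", {"user_id": 42, "action": "click"})
producer.flush()  # block until the message is delivered
```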
Responsibilities
- Implement various development, testing, automation tools, and IT infrastructure
- Design, build and automate the AWS infrastructure (VPC, EC2, Networking, EMR, RDS, S3, ALB, CloudFront, etc.) using Terraform
- Manage end-to-end production workloads hosted on Docker and AWS
- Automate CI pipeline using Groovy DSL
- Deploy and configure Kubernetes clusters (EKS)
- Design and build a CI/CD Pipeline to deploy applications using Jenkins and Docker
Eligibility
- At least 8 years of proven experience in AWS-based DevOps/cloud engineering and implementations
- Expertise in all common AWS Cloud services like EC2, EKS, S3, VPC, Lambda, API Gateway, ALB, Redis, etc.
- Experience in deploying and managing production environments in Amazon AWS
- Strong experience in continuous integration and continuous deployment
- Knowledge of application build, deployment, and configuration using one of the tools: Jenkins
Qualifications & Experience:
▪ 2 - 4 years overall experience in ETLs, data pipeline, Data Warehouse development and database design
▪ Software solution development using Hadoop Technologies such as MapReduce, Hive, Spark, Kafka, Yarn/Mesos etc.
▪ Expert in SQL, worked on advanced SQL for at least 2+ years
▪ Good development skills in Java, Python or other languages
▪ Experience with EMR, S3
▪ Knowledge and exposure to BI applications, e.g. Tableau, Qlikview
▪ Comfortable working in an agile environment
Job Description
Position: Sr Data Engineer – Databricks & AWS
Experience: 4 - 5 Years
Company Profile:
Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products/platforms capable of automated cognitive decision-making to improve productivity, quality, and economics of the underlying business processes. Currently, we are transforming ourselves and rapidly expanding our business.
Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt and Adani Group amongst others.
One of the top partners of Cloudera (leading analytics player) and Qlik (leader in BI technologies), Exponentia.ai has recently been awarded the ‘Innovation Partner Award’ by Qlik in 2017.
Get to know more about us on our website: http://www.exponentia.ai/ and Life @Exponentia.
Role Overview:
· A Data Engineer understands the client requirements and develops and delivers the data engineering solutions as per the scope.
· The role requires good skills in the development of solutions using various services required for data architecture on Databricks Delta Lake, streaming, AWS, ETL Development, and data modeling.
Job Responsibilities
• Design of data solutions on Databricks including delta lake, data warehouse, data marts and other data solutions to support the analytics needs of the organization.
• Apply best practices during design in data modeling (logical, physical) and ETL pipelines (streaming and batch) using cloud-based services.
• Design, develop and manage the pipelining (collection, storage, access), data engineering (data quality, ETL, Data Modelling) and understanding (documentation, exploration) of the data.
• Interact with stakeholders regarding data landscape understanding, conducting discovery exercises, developing proof of concepts and demonstrating it to stakeholders.
Technical Skills
• Has more than 2 years of experience in developing data lakes and data marts on the Databricks platform.
• Proven skill sets in AWS Data Lake services such as AWS Glue, S3, Lambda, SNS, and IAM, plus skills in Spark, Python, and SQL (a small Delta Lake write sketch follows this section).
• Experience in Pentaho
• Good understanding of developing a data warehouse, data marts etc.
• Has a good understanding of system architectures, and design patterns and should be able to design and develop applications using these principles.
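A minimal sketch of a Delta Lake write on Databricks; the paths are hypothetical, and on Databricks the `delta` format is available out of the box (elsewhere the delta-lake package must be configured):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

# Read raw JSON events from S3 (hypothetical bucket/prefix).
events = spark.read.json("s3://my-bucket/raw/events/")

# Deduplicate and append to a Delta table backing a data mart.
(events.dropDuplicates(["event_id"])
       .write.format("delta")
       .mode("append")
       .save("s3://my-bucket/delta/events"))
```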
Personality Traits
• Good collaboration and communication skills
• Excellent problem-solving skills to be able to structure the right analytical solutions.
• Strong sense of teamwork, ownership, and accountability
• Analytical and conceptual thinking
• Ability to work in a fast-paced environment with tight schedules.
• Good presentation skills with the ability to convey complex ideas to peers and management.
Education:
BE / ME / MS / MCA.
JD / Skills Sets
1. Good knowledge of Python
2. Good knowledge of MySQL and MongoDB
3. Design Pattern
4. OOPs
5. Automation
6. Web scraping
7. Redis queue (a minimal queue sketch follows this list)
8. Basic idea of Finance Domain will be beneficial.
9. Git
10. AWS (EC2, RDS, S3)
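A bare-bones sketch of the Redis-queue item above using redis-py; the host, queue name, and task payload are placeholders:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def enqueue(task: dict) -> None:
    # Producer: push a JSON-encoded task onto the list.
    r.lpush("tasks", json.dumps(task))

def run_worker() -> None:
    # Consumer: block until a task arrives, then process it.
    while True:
        _key, raw = r.brpop("tasks")
        task = json.loads(raw)
        print("processing", task)

enqueue({"job": "scrape", "url": "https://example.com"})
```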
Job Title: PHP (Laravel) Developer
Experience: 2 to 7 years
Skills:
PHP: Laravel (MVC)
Database: MySQL
Added advantage:
Database: MongoDB
Server Hosting: AWS (knowledge of EC2, S3, RDS & Route53)
Good to have experience in ReactJS, NodeJS, VueJS
Communication skills: Must have good communication skills
Requirements:
Understanding of MVC design patterns
Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
Knowledge of object-oriented PHP programming
In-depth knowledge of object-oriented PHP 7.x and Laravel 5/6+ PHP Framework
Experience with MVC, Entity Framework, Web Form, Web API, business-layer and front-end technologies
Creating database schemas that represent and support business processes
Familiarity with SQL/NoSQL databases and their declarative query languages
In-depth knowledge of Git, Bitbucket and related pipelines for continuous integration and continuous deployment
Creative and efficient problem-solving capability
Understanding of Agile development process
Developing rich and complex web applications efficiently, so that users can interact with the site or application smoothly.
Ability to understand technical documents like SRS, Design Document & Wireframes.
Personal Specifications –
- Proficiency in written and spoken English(must)
- Understanding of project development methodologies like Agile is preferred.
- Understand team development/Source code control
· 4+ years of experience as a Python Developer.
· Good understanding of Object-Oriented concepts and SOLID principles.
· Good understanding of programming and strong analytical skills.
· Should have hands-on experience with AWS cloud services like S3 and Lambda functions. (Must have; a short S3 + Pandas sketch follows this list.)
· Should have experience working with large datasets. (Must have)
· Proficient in using NumPy and Pandas. (Must have)
· Should have hands-on experience with MySQL. (Must have)
· Should have experience in debugging Python applications (Must have)
· Knowledge of working on Flask.
· Knowledge of object-relational mapping (ORM).
· Able to integrate multiple data sources and databases into one system
· Proficient understanding of code versioning tools such as Git, SVN
· Strong at problem-solving and logical abilities
· Sound knowledge of Front-end technologies like HTML5, CSS3, and JavaScript
· Strong commitment and desire to learn and grow.
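An illustrative sketch of working with a large dataset in S3 via boto3 and Pandas; the bucket and key are hypothetical, and chunked reads keep memory bounded on large files:

```python
import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="data/large.csv")  # hypothetical

total = 0
# Stream the CSV in 100k-row chunks instead of loading it whole.
for chunk in pd.read_csv(obj["Body"], chunksize=100_000):
    total += len(chunk)
print("rows processed:", total)
```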
Summary:
The Learner Company is an education start-up that designs personalized learning experiences by integrating them with the best of what technology offers. We are currently building an online learning engine to host adaptive online courses, simulations, and multiplayer games for institutional partners. We are now in the software development stage of the project.
We are looking for a full-stack developer to join our development team. The developer will be responsible for the overall development and implementation of front and back-end software applications. Their responsibilities will extend from designing system architecture to high-level programming, performance testing, and systems integration.
We are looking for an individual who is optimistic about technology and people, is open to and excited by new ideas, and considers themselves a life-long learner.
Responsibilities:
- Meeting with the software development team to define the scope and scale of software projects.
- Designing software system architecture.
- Completing data structures and design patterns.
- Designing and implementing scalable web services, applications, and APIs.
- Developing and maintaining internal software tools.
- Writing low-level and high-level code.
- Troubleshooting and bug fixing.
- Identifying bottlenecks and improving software efficiency.
- Collaborating with the design team on developing micro-services.
- Writing technical documents.
Required Competencies:
- Bachelor’s degree in computer engineering or computer science.
- Previous experience as a full stack engineer.
- Advanced knowledge of front-end languages including HTML5, CSS, TypeScript, JavaScript, C++, jQuery, React.js and Next.js.
- Knowledge of relational database systems and SQL.
- Familiarity with AWS architecture and working knowledge of services like S3, SES, EC2, RDS and more.
- Proficient in back-end languages including Java, Python, Rails, Ruby, .NET, and PHP.
- Advanced troubleshooting skills.
- Familiarity with MS Word, Excel, PowerPoint, Notion, Veed.io, Linear, Intercom, Plateau, and Miro.
- A strong belief that a team as a whole is greater than the sum of its parts.
- Excellent leadership, communication, and organization skills
Experience Needed: 2+ Years
Location: Bengaluru
Java Developers [I+S/E2-MM2]
Java Full Stack Developer
We are looking for a skilled Full Stack Developer who is passionate about building high-quality software applications. The ideal candidate will have expertise in Java frameworks and extensions, persistence frameworks, servers, platforms, clouds, databases, data storage, and QA tools. The candidate should also have experience working with Angular, React, or Vue.
As a Full Stack Developer, you will be responsible for developing and maintaining software applications for our clients. You will work closely with a team of developers and project managers to deliver high-quality software products. You should be comfortable working in a fast-paced environment and be able to adapt to changing priorities.
Mandatory Skill Sets:
Java frameworks and extensions: You should be proficient in building enterprise-grade applications using Java 8+ and Spring Boot.
Persistence Frameworks: You should have experience working with Hibernate and/or JPA. You should be able to design and develop efficient data models, and perform CRUD operations using Hibernate and/or JPA.
Servers: You should be familiar with Apache Tomcat and be able to deploy applications on Tomcat servers.
Platforms: You should have experience working with Java EE and Jakarta EE platforms.
Clouds: You should have experience working with AWS, and be familiar with AWS services such as EC2, S3, and RDS.
Databases / Data Storage: You should have experience working with MySQL and Oracle databases.
QA Tools: You should be proficient in JUnit5 and Postman. You should be able to write and execute unit tests, integration tests, and end-to-end tests using these tools.
Web Services: You should have experience working with RESTful web services.
API Security: You should be familiar with OAuth2, JWT, Auth0, or any other API security frameworks.
Angular/React/Vue: You should have experience working with at least one of these frontend frameworks, HTML, CSS, and JavaScript.
If you are passionate about building high-quality software applications and have the required skill sets, we encourage you to apply. We offer competitive salaries and benefits, and a challenging work environment where you can learn and grow.
About Kloud9:
Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.
Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is limited by the cost of physical data infrastructure, which poses a huge financial challenge.
At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.
Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.
We are a cloud vendor that is both platform and technology independent. Our vendor independence not just provides us with a unique perspective into the cloud market but also ensures that we deliver the cloud solutions available that best meet our clients' requirements.
What we are looking for:
● 3+ years’ experience developing Data & Analytic solutions
● Experience building data lake solutions leveraging one or more of the following: AWS, EMR, S3, Hive & Spark
● Experience with relational SQL
● Experience with scripting languages such as Shell, Python
● Experience with source control tools such as GitHub and related dev process
● Experience with workflow scheduling tools such as Airflow (a minimal DAG sketch follows this list)
● In-depth knowledge of scalable cloud architectures
● Has a passion for data solutions
● Strong understanding of data structures and algorithms
● Strong understanding of solution and technical design
● Has a strong problem-solving and analytical mindset
● Experience working with Agile Teams.
● Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders
● Able to quickly pick up new programming languages, technologies, and frameworks
● Bachelor’s Degree in computer science
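A minimal Airflow DAG sketch matching the workflow-scheduling item above; the DAG id, task id, and callable are placeholders rather than anything from the posting:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pulling raw files from s3://my-bucket/raw/")  # placeholder step

with DAG(
    dag_id="daily_lake_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # run once per day
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract)
```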
Why Explore a Career at Kloud9:
With job opportunities in prime locations of US, London, Poland and Bengaluru, we help build your career paths in cutting edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with their creativity and innovative solutions. Our vested interest in our employees translates to deliver the best products and solutions to our customers.