

We're Hiring: Python AWS Fullstack Developer at InfoGrowth!
Join InfoGrowth as a Python AWS Fullstack Developer and be a part of our dynamic team driving innovative cloud-based solutions!
Job Role: Python AWS Fullstack Developer
Location: Bangalore & Pune
Mandatory Skills:
- Proficiency in Python programming.
- Hands-on experience with AWS services and migration.
- Experience in developing cloud-based applications and pipelines.
- Familiarity with DynamoDB, OpenSearch, and Terraform (preferred).
- Solid understanding of front-end technologies: ReactJS, JavaScript, TypeScript, HTML, and CSS.
- Experience with Agile methodologies, Git, CI/CD, and Docker.
- Knowledge of Linux (preferred).
Preferred Skills:
- Understanding of ADAS (Advanced Driver Assistance Systems) and automotive technologies.
- AWS Certification is a plus.
Why Join InfoGrowth?
- Work on cutting-edge technology in a fast-paced environment.
- Collaborate with talented professionals passionate about driving change in the automotive and tech industries.
- Opportunities for professional growth and development through exciting projects.
Apply Now to elevate your career with InfoGrowth and make a difference in the automotive sector!

Similar jobs
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
The client's department, DPS (Digital People Solutions), offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which comprises the 40 largest and most liquid companies on the Frankfurt Stock Exchange.
We are seeking talented DevOps Engineers with a focus on the Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack (ELK). You will independently handle tasks related to architecture, setup, technical migration, and documentation.
The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for the customer's employees and customers alike.
Responsibilities:
Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, the APM server, APM agents, and interface configuration.
Create and develop regular "Default Dashboards" for visualizing metrics from various sources such as the Apache web server, application servers, and databases.
Improve and fix bugs in installation and automation routines.
Monitor CPU usage, security findings, and AWS alerts.
Develop and extend "Default Alerting" for issues like OOM errors, datasource issues, and LDAP errors.
Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).
Implement machine learning, uptime monitoring including SLA, JIRA integration, security analysis, anomaly detection, and other useful ELK Stack features.
Integrate data from AWS CloudWatch.
Document all relevant information and train the personnel involved in the technologies used.
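For illustration only, the CloudWatch integration mentioned in the responsibilities above could be prototyped with a short Python script along these lines; the Elasticsearch endpoint, index name, and instance ID are placeholder assumptions, and in practice a Metricbeat module would usually handle this instead:

# Hypothetical sketch: pull one CloudWatch metric and index it into Elasticsearch.
# The endpoint, index name, and instance ID below are placeholders, not real values.
from datetime import datetime, timedelta, timezone

import boto3
import requests

ES_URL = "https://elastic.example.internal:9200"   # placeholder Elasticsearch endpoint
INDEX = "cloudwatch-metrics"                       # placeholder index name

cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")
now = datetime.now(timezone.utc)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=now - timedelta(minutes=15),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

for point in resp["Datapoints"]:
    doc = {
        "@timestamp": point["Timestamp"].isoformat(),
        "metric": "CPUUtilization",
        "value": point["Average"],
        "unit": point["Unit"],
    }
    # Index each datapoint; a Metricbeat module would normally do this continuously.
    requests.post(f"{ES_URL}/{INDEX}/_doc", json=doc, timeout=10).raise_for_status()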
Requirements:
Experience with Elastic Stack (ELK) components and related technologies.
Proficiency in automation tools like Ansible and CloudFormation.
Strong knowledge of AWS Cloud services.
Experience in creating and managing dashboards and alerts.
Familiarity with IAM roles and rights management.
Ability to document processes and train team members.
Excellent problem-solving skills and attention to detail.
Skills & Requirements
Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.

Job Overview:
You will work in engineering and development teams to integrate and develop cloud solutions and virtualized deployments of a software-as-a-service product. This requires an understanding of the software system architecture as well as its performance and security requirements. The DevOps Engineer is also expected to have expertise in available cloud solutions and services, administration of virtual machine clusters, performance tuning and configuration of cloud computing resources, configuration of security, and scripting and automation of monitoring functions. This position requires the deployment and management of multiple virtual clusters and working with compliance organizations to support security audits. The design and selection of cloud computing solutions that are reliable, robust, extensible, and easy to migrate are also important.
Experience:
- Experience working on billing and budgets for a GCP project - MUST
- Experience working on optimizations on GCP based on vendor recommendations - NICE TO HAVE
- Experience in implementing such recommendations on GCP
- Architect certification on GCP - MUST
- Excellent communication skills (both verbal & written) - MUST
- Excellent documentation skills for processes, steps, and instructions - MUST
- At least 2 years of experience on GCP.
Basic Qualifications:
- Bachelor's/Master's Degree in Engineering or equivalent.
- Extensive scripting or programming experience (Shell Script, Python).
- Extensive experience working with CI/CD (e.g. Jenkins).
- Extensive experience working with GCP, Azure, or Cloud Foundry.
- Experience working with databases (PostgreSQL, Elasticsearch).
- Must have a minimum of 2 years of experience with GCP, along with GCP certification.
Benefits:
- Competitive salary.
- Work from anywhere.
- Learning and gaining experience rapidly.
- Reimbursement for a basic work-from-home setup.
- Insurance (including top-up insurance for COVID).
Location:
Remote - work from anywhere.
Ideal joining preferences:
Immediate or 15 days
• Must have work experience in cloud environments: AWS / Azure / GCP
• Must have strong work experience (2+ years) developing IaC (e.g. Terraform).
• Must have strong work experience in Ansible development and deployment.
• A Bachelor's degree with a background in math will be a PLUS.
• Must have 8+ years of experience with a mix of Linux and Windows systems in a medium to large business environment.
• Must have command-level fluency and shell scripting experience in a mix of Linux and Windows environments.
• Must enjoy working in small, fast-paced teams.
• Identify opportunities for improvement in existing processes and automate them using Ansible flows.
• Fine-tune performance and operational issues that arise with automation flows.
• Experience administering container management systems like Kubernetes would be a plus.
• Certification with Red Hat or any other Linux variant will be a BIG PLUS.
• Fluent in the use of Microsoft Office applications (Outlook / Word / Excel).
• Possess a strong aptitude for automating and the timely completion of standard/routine tasks.
• Experience with automation and configuration control systems like Puppet or Chef is a plus.
• Experience with Docker and Kubernetes (or a container orchestration equivalent) is nice to have.
Key Responsibilities:
- Work with the development team to plan, execute and monitor deployments
- Capacity planning for product deployments
- Adopt best practices for deployment and monitoring systems
- Ensure the SLAs for performance and uptime are met
- Constantly monitor systems, suggest changes to improve performance and decrease costs.
- Ensure the highest standards of security
Key Competencies (Functional):
- Proficiency in coding in at least one scripting language (Bash, Python, etc.)
- Has personally managed a fleet of servers (> 15)
- Understands the different environments: production, deployment, and staging
- Has worked on microservice / service-oriented architecture systems
- Has worked with automated deployment systems (Ansible / Chef / Puppet)
- Can write MySQL queries
Key Responsibilities
• As a part of the DevOps team, you will be responsible for the configuration, optimization, documentation, and support of the CI/CD components.
• Creating and managing build and release pipelines with Azure DevOps and Jenkins.
• Assist in planning and reviewing application architecture and design to promote an efficient deployment process.
• Troubleshoot server performance issues and handle the continuous integration system.
• Automate infrastructure provisioning using ARM Templates and Terraform.
• Monitor and support deployments, and cloud-based and on-premises infrastructure.
• Diagnose and develop root-cause solutions for failures and performance issues in the production environment.
• Deploy and manage infrastructure for production applications.
• Configure security best practices for applications and infrastructure.
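As a rough, hedged sketch of what the Terraform provisioning automation mentioned above might look like when wrapped in a script, the Python below runs a plan-then-apply cycle and fails fast on errors; the working directory is a placeholder assumption, not part of an actual project:

# Hypothetical wrapper around the Terraform CLI for a plan/apply cycle.
# The directory below is a placeholder for illustration only.
import subprocess
import sys

TF_DIR = "infra/environments/dev"   # placeholder Terraform working directory


def run(*args: str) -> None:
    """Run a terraform command in TF_DIR and abort on a non-zero exit code."""
    cmd = ["terraform", f"-chdir={TF_DIR}", *args]
    print("+", " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)


if __name__ == "__main__":
    run("init", "-input=false")
    run("plan", "-input=false", "-out=tfplan")
    # Applying a saved plan keeps the apply step non-interactive and reviewable.
    run("apply", "-input=false", "tfplan")

Applying a previously saved plan file, rather than re-planning at apply time, is one common way to keep pipeline runs predictable.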
Essential Requirements
• Good hands-on experience with cloud platforms like Azure, AWS & GCP (preferably Azure).
• Strong knowledge of CI/CD principles.
• Strong work experience with CI/CD implementation tools like Azure DevOps, TeamCity, Octopus Deploy, AWS CodeDeploy, and Jenkins.
• Experience writing automation scripts with PowerShell, Bash, Python, etc.
• GitHub, JIRA, Confluence, and a Continuous Integration (CI) system.
• Understanding of secure DevOps practices.
Good to Have:
• Knowledge of scripting languages such as PowerShell and Bash.
• Experience with project management and workflow tools and methods such as Jira, Agile, and Scrum/Kanban.
• Experience with build technologies and cloud services (Jenkins, TeamCity, Azure DevOps, Bamboo, AWS CodeDeploy).
• Strong communication skills and the ability to explain protocols and processes to the team and management.
• Must be able to handle multiple tasks and adapt to a constantly changing environment.
• Must have a good understanding of the SDLC.
• Knowledge of Linux, Windows Server, monitoring tools, and shell scripting.
• Self-motivated, demonstrating the ability to ramp up on technologies with minimal supervision.
• Organized, flexible, and analytical, with the ability to solve problems creatively.
We are looking for a DevOps Engineer to manage the interchange of data between the server and the users. Your primary responsibility will be the development of all server-side logic, the definition and maintenance of the central database, and ensuring high performance and responsiveness to requests from the front end. You will also be responsible for integrating the front-end elements built by your co-workers into the application; therefore, a basic understanding of front-end technologies is necessary as well.
What we are looking for
- Must have strong knowledge of Kubernetes and Helm 3.
- Should have previous experience in Dockerizing applications.
- Should be able to automate manual tasks using Shell or Python.
- Should have good working knowledge of the AWS and GCP clouds.
- Should have previous experience working with Bitbucket, GitHub, or any other VCS.
- Must be able to write Jenkins pipelines and have working knowledge of GitOps and ArgoCD.
- Should have hands-on experience in proactive monitoring using tools like New Relic, Prometheus, Grafana, Fluent Bit, etc.
- Should have a good understanding of the ELK Stack.
- Exposure to Jira, Confluence, and sprints.
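As a small, hedged example of combining the "automate manual tasks with Python" and "proactive monitoring with Prometheus" points above, the sketch below probes a service health endpoint and pushes the result to a Prometheus Pushgateway; the URL, gateway address, and job name are placeholder assumptions:

# Hypothetical sketch: probe a health endpoint and push the result to a
# Prometheus Pushgateway. URLs and job name are placeholders, not real services.
import requests
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

HEALTH_URL = "https://api.example.internal/healthz"  # placeholder endpoint
PUSHGATEWAY = "pushgateway.example.internal:9091"    # placeholder gateway address

registry = CollectorRegistry()
up = Gauge("service_health_up", "1 if the health check succeeded, else 0",
           registry=registry)
latency = Gauge("service_health_latency_seconds", "Health check latency",
                registry=registry)

try:
    resp = requests.get(HEALTH_URL, timeout=5)
    up.set(1 if resp.ok else 0)
    latency.set(resp.elapsed.total_seconds())
except requests.RequestException:
    up.set(0)

# Push the metrics so Prometheus can scrape them from the gateway.
push_to_gateway(PUSHGATEWAY, job="health_probe", registry=registry)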
What you will do:
- Mentor junior DevOps engineers and raise the team's bar
- Primary owner of tech best practices, tech processes, DevOps initiatives, and timelines
- Oversight of all server environments, from Dev through Production.
- Responsible for the automation and configuration management
- Provides stable environments for quality delivery
- Assist with day-to-day issue management.
- Take lead in containerising microservices
- Develop deployment strategies that allow DevOps engineers to successfully deploy code in any environment.
- Enables the automation of CI/CD
- Implement dashboards to monitor various systems and applications
- 1-3 years of experience in DevOps
- Experience in setting up front-end best practices
- Experience working in high-growth startups
- Ownership and a proactive attitude
- Mentorship & upskilling mindset
What you'll get:
- Health benefits
- Innovation-driven culture
- Smart and fun team to work with
- Friends for life

Striim (pronounced "stream" with two i's for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it's born.
Striim's enterprise-grade streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines, including change data capture (CDC), to power real-time cloud integration, log correlation, edge processing, and streaming analytics.
- 2-5 years of experience in programming in any language (polyglot preferred) and system operations.
- Awareness of DevOps and Agile methodologies.
- Proficient in leveraging CI and CD tools to automate testing and deployment.
- Experience working in an agile and fast-paced environment.
- Hands-on knowledge of at least one cloud platform (AWS / GCP / Azure).
- Cloud networking knowledge: should understand VPCs, NATs, and routers.
- Contributions to open source are a plus.
- Good written communication skills are a must; contributions to technical blogs / whitepapers will be an added advantage.
Roles and Responsibilities
- Managing availability, performance, and capacity of infrastructure and applications.
- Building and implementing observability for application health/performance/capacity.
- Optimizing on-call rotations and processes.
- Documenting "tribal" knowledge.
- Managing infra platforms like:
  - Mesos/Kubernetes
  - CI/CD
  - Observability (Prometheus / New Relic / ELK)
  - Cloud platforms (AWS / Azure)
  - Databases
  - Data platforms infrastructure
- Providing help in onboarding new services with the production readiness review process.
- Providing reports on services' SLOs/Error Budgets/Alerts and operational overhead.
- Working with Dev and Product teams to define SLOs/Error Budgets/Alerts.
- Working with the Dev team to gain an in-depth understanding of the application architecture and its bottlenecks.
- Identifying observability gaps in product services and infrastructure and working with stakeholders to fix them.
- Managing outages, doing detailed RCAs with developers, and identifying ways to avoid such situations.
- Managing/automating upgrades of the infrastructure services.
- Automating away toil work.
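To make the SLO/error-budget reporting mentioned above concrete, here is a minimal, illustrative calculation; the SLO target and request counts are made-up numbers, not figures from any real service:

# Illustrative error-budget math for a request-based SLO; all numbers are made up.
slo_target = 0.999           # 99.9% of requests should succeed over the window
total_requests = 12_500_000  # requests served during the reporting window
failed_requests = 9_300      # requests that violated the SLI

error_budget = (1 - slo_target) * total_requests    # failures allowed: 12,500
budget_consumed = failed_requests / error_budget    # fraction of budget used
availability = 1 - failed_requests / total_requests # observed availability

print(f"Observed availability: {availability:.5%}")
print(f"Error budget consumed: {budget_consumed:.1%}")  # ~74.4% in this example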
Experience & Skills
- 3+ years of experience as an SRE/DevOps/Infrastructure Engineer on large-scale microservices and infrastructure.
- A collaborative spirit with the ability to work across disciplines to influence, learn, and deliver.
- A deep understanding of computer science, software development, and networking principles.
- Demonstrated experience with languages such as Python, Java, Golang, etc.
- Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
- Extensive experience with DNS, TCP/IP, UDP, gRPC, routing, and load balancing.
- Expertise in GitOps, Infrastructure as Code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, and Ansible.
- Expertise in Amazon Web Services (AWS) and/or other relevant cloud infrastructure solutions like Microsoft Azure or Google Cloud.
- Experience in building CI/CD solutions with tools such as Jenkins, GitLab, Spinnaker, Argo, etc.
- Experience in managing and deploying containerized environments using Docker and Mesos/Kubernetes is a plus.
- Experience with multiple datastores is a plus (MySQL, PostgreSQL, Aerospike, Couchbase, Scylla, Cassandra, Elasticsearch).
- Experience with data platform tech stacks like Hadoop, Hive, Presto, etc. is a plus.
Responsibilities
- Designing and building infrastructure to support AWS, Azure, and GCP-based Cloud services and infrastructure.
- Creating and utilizing tools to monitor our applications and services in the cloud, including system health indicators, trend identification, and anomaly detection.
- Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
- Participating in on-call escalation to troubleshoot customer-facing issues.
- Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
- Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.
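As a toy illustration of the trend identification and anomaly detection responsibility above, the sketch below flags datapoints whose z-score against a trailing window exceeds a threshold; the sample series and threshold are invented for the example:

# Toy anomaly detection: flag points far from the mean of a trailing window.
# The sample latencies and threshold below are invented for illustration.
from statistics import mean, stdev

WINDOW = 6        # number of trailing samples used as the baseline
THRESHOLD = 3.0   # z-score above which a point is treated as anomalous

latencies_ms = [102, 98, 105, 99, 101, 103, 100, 97, 250, 104, 102]  # fake data

for i in range(WINDOW, len(latencies_ms)):
    baseline = latencies_ms[i - WINDOW:i]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        continue  # flat baseline, skip to avoid dividing by zero
    z = (latencies_ms[i] - mu) / sigma
    if abs(z) > THRESHOLD:
        print(f"Anomaly at index {i}: {latencies_ms[i]} ms (z-score {z:.1f})")

Running it on the fake series flags only the 250 ms spike; real pipelines would typically use a proper time-series store and more robust statistics.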
Skills
- Should have a couple of years of strong experience in leading a DevOps team, and in planning, defining, and executing a DevOps roadmap together with the team
- Familiarity with the AWS cloud, JSON templates, Python, and AWS CloudFormation templates
- Designing solutions using one or more AWS features, tools, and technologies such as EC2, EBS, Glacier, S3, ELB, CloudFormation, Lambda, CloudWatch, VPC, RDS, Direct Connect, the AWS CLI, and REST APIs
- Design and implement system architecture with the AWS cloud
- Develop automation scripts using ARM templates, Ansible, Chef, Python, and PowerShell
- Knowledge of AWS services and cloud design patterns
- Knowledge of cloud fundamentals like autoscaling and serverless
- Experience with DevOps and Infrastructure as Code: AWS environment and application automation utilizing CloudFormation and third-party tools, and CI/CD pipeline setup
- CI experience with the following is a must: Jenkins, Bitbucket/Git, Nexus or Artifactory, SonarQube, and WireMock or another mocking solution
- Expert knowledge of Windows, Linux, and macOS, with at least 5-6 years of system administration experience
- Should have strong skills in using JIRA
- Should have knowledge in managing the CI/CD pipeline on public cloud deployments using AWS
- Should have strong skills in using tools like Jenkins, Docker, Kubernetes (AWS EKS, Azure AKS), and CloudFormation.
- Experience in monitoring tools like Pingdom, Nagios, etc.
- Experience in reverse proxy services like Nginx and Apache
- Desirable: experience with Bitbucket and version control tools like Git/SVN
- Experience with manual/automated testing of application deployments is desired
- Experience in database technologies such as PostgreSQL, MySQL
- Knowledge of Helm and Terraform
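As an illustration of driving CloudFormation from Python with boto3, which several of the skills above touch on, the sketch below creates a stack from a local template and waits for it to finish; the stack name, template path, region, and parameter are placeholder assumptions:

# Hypothetical sketch: create a CloudFormation stack with boto3 and wait for it.
# Stack name, template file, region, and parameters are placeholders.
import boto3

STACK_NAME = "demo-web-stack"          # placeholder stack name
TEMPLATE_FILE = "templates/web.yaml"   # placeholder local template path

cf = boto3.client("cloudformation", region_name="us-east-1")

with open(TEMPLATE_FILE) as f:
    template_body = f.read()

cf.create_stack(
    StackName=STACK_NAME,
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t3.micro"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # only needed if the template creates IAM resources
)

# Block until CloudFormation reports CREATE_COMPLETE (or raise on failure/timeout).
cf.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)
print(f"Stack {STACK_NAME} created")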
What we are looking for
- Work closely with product & engineering groups to identify and document infrastructure requirements.
- Design infrastructure solutions balancing requirements, operational constraints, and architecture guidelines.
- Implement infrastructure including network connectivity, virtual machines, and monitoring.
- Implement and follow security guidelines, both policy and technical, to protect our customers.
- Resolve incidents as escalated from monitoring solutions and lower tiers.
- Identify the root cause of issues and develop long-term solutions to fix recurring issues.
- Ability to automate recurring tasks to increase velocity and quality.
- Partner with the engineering team to build software tolerance for infrastructure failure or issues.
- Research emerging technologies, trends, and methodologies and enhance existing systems and processes.
Qualifications
- Master's/Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field, and two years of experience in software/systems or related.
- 5+ years of overall experience.
Work experience must have included:
- Proven track record in deploying, configuring, and maintaining Ubuntu server systems on premise and in the cloud.
- Minimum of 4 years' experience designing, implementing, and troubleshooting TCP/IP networks, VPNs, load balancers & firewalls.
- Minimum of 3 years of experience working in public clouds like AWS & Azure.
- Hands-on experience with any of the configuration management tools like Ansible, Chef & Puppet.
- Strong in performing production operation activities.
- Experience with containers & container orchestrator tools like Kubernetes and Docker Swarm is a plus.
- Good at source code management tools like Bitbucket, Git.
- Configuring and utilizing monitoring and alerting tools.
- Scripting to automate infrastructure and operational processes.
- Hands-on work to secure networks and systems.
- Sound problem resolution, judgment, negotiating, and decision-making skills.
- Ability to manage and deliver multiple project phases at the same time.
- Strong analytical and organizational skills.
- Excellent written and verbal communication skills.
Interview focus areas
- Networks, systems, monitoring
- AWS (EC2, S3, VPC)
- Problem solving, scripting, network design, systems administration, and troubleshooting scenarios
- Culture fit, agility, bias for action, ownership, communication

