We're Hiring: Python AWS Fullstack Developer at InfoGrowth!
Join InfoGrowth as a Python AWS Fullstack Developer and be a part of our dynamic team driving innovative cloud-based solutions!
Job Role: Python AWS Fullstack Developer
Location: Bangalore & Pune
Mandatory Skills:
- Proficiency in Python programming.
- Hands-on experience with AWS services and migration.
- Experience in developing cloud-based applications and pipelines.
- Familiarity with DynamoDB, OpenSearch, and Terraform (preferred); a minimal DynamoDB sketch follows the skills lists below.
- Solid understanding of front-end technologies: ReactJS, JavaScript, TypeScript, HTML, and CSS.
- Experience with Agile methodologies, Git, CI/CD, and Docker.
- Knowledge of Linux (preferred).
Preferred Skills:
- Understanding of ADAS (Advanced Driver Assistance Systems) and automotive technologies.
- AWS Certification is a plus.
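For illustration only (not part of the role description): the skills above call for Python work against AWS services such as DynamoDB. A minimal, hedged boto3 sketch, in which the table name, key schema, and region are assumptions, could look like this:

```python
# Hypothetical sketch: writing and reading a DynamoDB item with boto3.
# The table name ("candidates"), key ("email"), and region are assumptions.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="ap-south-1")
table = dynamodb.Table("candidates")  # assumed table with partition key "email"

def save_candidate(email: str, name: str) -> None:
    """Insert or overwrite a single item."""
    table.put_item(Item={"email": email, "name": name})

def get_candidate(email: str):
    """Fetch an item by its partition key; returns None if absent."""
    return table.get_item(Key={"email": email}).get("Item")

if __name__ == "__main__":
    save_candidate("dev@example.com", "Asha")
    print(get_candidate("dev@example.com"))
```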
Why Join InfoGrowth?
- Work on cutting-edge technology in a fast-paced environment.
- Collaborate with talented professionals passionate about driving change in the automotive and tech industries.
- Opportunities for professional growth and development through exciting projects.
Apply Now to elevate your career with InfoGrowth and make a difference in the automotive sector!
Similar jobs
Hi,
We are looking for a candidate with experience in DevSecOps.
Please find the JD below for your reference.
Responsibilities:
Execute shell scripts for seamless automation and system management.
Implement infrastructure as code using Terraform for AWS, Kubernetes, Helm, kustomize, and kubectl.
Oversee AWS security groups, VPC configurations, and utilize Aviatrix for efficient network orchestration (a minimal audit sketch follows this list).
Contribute to the OpenTelemetry Collector for enhanced observability.
Implement microsegmentation using AWS native resources and Aviatrix for commercial routes.
Enforce policies through Open Policy Agent (OPA) integration.
Develop and maintain comprehensive runbooks for standard operating procedures.
Utilize packet tracing for network analysis and security optimization.
Apply OWASP tools and practices for robust web application security.
Integrate container vulnerability scanning tools seamlessly within CI/CD pipelines.
Define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaborate with software and platform engineers to infuse security principles into DevOps teams.
Regularly monitor and report project status to the management team.
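As a hedged illustration of the security-group oversight point above (not the team's actual tooling; the region and report format are assumptions), a small boto3 script could flag groups that allow unrestricted ingress:

```python
# Illustrative sketch: flag security groups with 0.0.0.0/0 ingress rules.
# Region and output format are assumptions for the example.
import boto3

def open_ingress_groups(region: str = "us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append(
                        f"{sg['GroupId']} ({sg.get('GroupName', '?')}) "
                        f"allows {rule.get('IpProtocol')} from anywhere"
                    )
    return findings

if __name__ == "__main__":
    for finding in open_ingress_groups():
        print(finding)
```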
Qualifications:
Proficient in shell scripting and automation.
Strong command of Terraform, AWS, Kubernetes, Helm, kustomize, and kubectl.
Deep understanding of AWS security practices, VPC configurations, and Aviatrix.
Familiarity with OpenTelemetry for observability and OPA for policy enforcement.
Experience in packet tracing for network analysis.
Practical application of OWASP tools and web application security.
Integration of container vulnerability scanning tools within CI/CD pipelines.
Proven ability to define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaboration expertise with DevOps teams for security integration.
Regular monitoring and reporting capabilities.
Site Reliability Engineering experience.
Hands-on proficiency with source code management tools, especially Git.
Cloud platform expertise (AWS, Azure, or GCP) with hands-on experience in deploying and managing applications.
Please send across your updated profile.
You will be responsible for:
- Managing all DevOps and infrastructure for Sizzle (we have both cloud and on-premise servers)
- Working closely with all AI and backend engineers on processing requirements, across both development and production
- Optimizing the pipeline to ensure ultra-fast processing
- Working closely with the management team on infrastructure upgrades
You should have the following qualities:
- 3+ years of experience in DevOps and CI/CD
- Deep experience in: GitLab, GitOps, Ansible, Docker, Grafana, Prometheus
- Strong background in Linux system administration
- Deep expertise with AI/ML pipeline processing, especially with GPU processing. This doesn't need to include model training, data gathering, etc. We're looking more for experience with model deployment and inferencing tasks at scale
- Deep expertise in Python, including multiprocessing / multithreaded applications
- Performance profiling, including memory, CPU, and GPU profiling
- Error handling and building robust scripts that are expected to run for weeks to months at a time
- Deploying to production servers, then monitoring and maintaining the scripts
- DB integration including pymongo and SQLAlchemy (we have MongoDB and PostgreSQL databases on our backend); a minimal worker sketch combining these points follows this list
- Expertise in Docker-based virtualization, including creating and maintaining custom Docker images, deploying them to cloud and on-premise services, and monitoring production images with robust error handling
- Expertise in AWS infrastructure, networking, availability
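Several of the points above (Python multiprocessing, robust long-running scripts, pymongo integration) come together in the hedged sketch below; the MongoDB URI, database names, batch logic, and polling interval are assumptions, not Sizzle's actual pipeline:

```python
# Hedged sketch: a long-running multiprocessing worker with per-item error
# handling and MongoDB persistence. URI, names, and task logic are assumed.
import logging
import time
from multiprocessing import Pool

from pymongo import MongoClient

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("worker")

def process_item(item_id: int) -> dict:
    """Placeholder for a CPU/GPU-bound processing step."""
    return {"item_id": item_id, "status": "done"}

def safe_process(item_id: int) -> dict:
    """Never let one bad item kill a pool worker."""
    try:
        return process_item(item_id)
    except Exception:
        log.exception("item %s failed", item_id)
        return {"item_id": item_id, "status": "error"}

def main() -> None:
    results = MongoClient("mongodb://localhost:27017")["pipeline"]["results"]
    with Pool(processes=4) as pool:
        while True:  # expected to run for weeks; each batch is isolated
            batch = list(range(10))  # stand-in for real work discovery
            for doc in pool.imap_unordered(safe_process, batch):
                results.insert_one(doc)
            time.sleep(60)

if __name__ == "__main__":
    main()
```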
Optional but beneficial to have:
- Experience with running Nvidia GPU / CUDA-based tasks
- Experience with image processing in Python (e.g., OpenCV, Pillow)
- Experience with PostgreSQL and MongoDB (or SQL familiarity)
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor's or Master's degree in computer science or a related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
DevOps, Ansible, CI/CD, GitLab, GitOps, Docker, Python, AWS, GCP, Grafana, Prometheus, SQLAlchemy, Linux / Ubuntu system administration
Seniority: We are looking for a mid to senior level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 3 years to 6 years
Type, Location
Full Time @ Anywhere in India
Desired Experience
2+ years
Job Description
What You'll Do
- Deploy, automate, and maintain web-scale infrastructure with leading public cloud vendors such as Amazon Web Services, DigitalOcean & Google Cloud Platform.
- Take charge of DevOps activities for CI/CD with the latest tech stacks.
- Acquire industry-recognized, professional cloud certifications (AWS/Google) in the capacity of developer or architect. Devise multi-region technical solutions.
- Implement the DevOps philosophy and strategy across different domains in the organisation.
- Build automation at various levels, including code deployment, to streamline the release process.
- Be responsible for the architecture of cloud services.
- 24x7 monitoring of the infrastructure.
- Use programming/scripting in your day-to-day work.
- Have shell experience - for example PowerShell on Windows, or Bash on *nix.
- Use a version control system, preferably Git.
- Hands-on with the CLI/SDK/API of at least one public cloud (GCP, AWS, DO).
- Scalability, HA, and troubleshooting of web-scale applications.
- Infrastructure-as-Code tools like Terraform, CloudFormation.
- CI/CD systems such as Jenkins, CircleCI.
- Container technologies such as Docker, Kubernetes, OpenShift.
- Monitoring and alerting systems, e.g. New Relic, AWS CloudWatch, Google Stackdriver, Graphite, Nagios/Icinga (a minimal CloudWatch sketch follows this list).
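As a hedged example of the monitoring and alerting point above (alarm name, instance ID, SNS topic, and threshold are illustrative assumptions), a CloudWatch CPU alarm can be created from Python with boto3:

```python
# Illustrative sketch: create a CloudWatch alarm on EC2 CPU utilisation.
# Alarm name, instance ID, SNS topic ARN, and threshold are assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,              # 5-minute datapoints
    EvaluationPeriods=2,     # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:alerts"],  # assumed topic
)
```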
What you bring to the table
- Hands-on experience with cloud compute services, Cloud Functions, networking, load balancing, and autoscaling.
- Hands-on with GCP/AWS compute & networking services, i.e. Compute Engine, App Engine, Kubernetes Engine, Cloud Functions, networking (VPC, firewall, load balancer), Cloud SQL, Datastore.
- DBs: PostgreSQL, MySQL, Elasticsearch, Redis, Kafka, MongoDB, or other NoSQL systems.
- Configuration management tools such as Ansible/Chef/Puppet.
Bonus if you have…
- Basic understanding of networking (routing, switching, DNS) and storage
- Basic understanding of protocols such as UDP/TCP
- Basic understanding of cloud computing
- Basic understanding of cloud computing models like SaaS, PaaS
- Basic understanding of Git or any other source code repo
- Basic understanding of databases (SQL/NoSQL)
- Great problem-solving skills
- Good communication skills
- Adaptability and willingness to learn
DevOps Engineer
Roles and Responsibilities:
As a DevOps Engineer, you'll be responsible for ensuring that our products can be seamlessly deployed on infrastructure, whether it is on-prem or on public clouds.
- Create, manage, and improve CI/CD pipelines to ensure our platform and applications can be deployed seamlessly
- Evaluate, Debug, and Integrate our products with various Enterprise systems & applications
- Build metrics, monitoring, logging, configurations, analytics and alerting for performance and security across all endpoints and applications
- Build and manage infrastructure-as-code deployment tooling, solutions, microservices and support services on multiple cloud providers and on-premises
- Ensure reliability, availability and security of our infrastructure and products
- Update our processes and design new processes as needed to optimize performance
- Automate our processes in compliance with our security requirements
- Manage code deployments, fixes, updates, and related processes
- Manage environment where we deploy our product to multiple clouds that we control as well as to client-managed environments
- Work with CI/CD tools, and source control such as Git and SVN
Skills/Requirements:
- 2+ years of experience in DevOps, SRE or equivalent positions
- Experience working with Infrastructure as Code / Automation tools
- Experience in deploying, analysing, and debugging on multiple environments (AWS, Azure, private clouds, data centres, etc.); Linux/Unix administration; and databases such as MySQL, PostgreSQL, NoSQL, DynamoDB, Cosmos DB, MongoDB, Elasticsearch, and Redis (both managed instances and self-installed).
- Knowledge of scripting languages such as Python, PowerShell, and/or Bash.
- Hands-on experience with the following is a must: Docker, Kubernetes, ELK Stack (a minimal Kubernetes client sketch follows this list).
- Hands-on experience with at least three of the following: Terraform, AWS CloudFormation, Jenkins, Wazuh SIEM, Ansible, Ansible Tower, Puppet, Chef
- Good troubleshooting skills with the ability to spot issues.
- Strong communication skills and documentation skills.
- Experience with deployments with Fortune 500 or other large Global Enterprise clients is a big plus
- Experience with participating in an ISO27001 certification / renewal cycle is a plus.
- Understanding of Information Security fundamentals and compliance requirements
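For the Docker/Kubernetes requirement above, a hedged sketch using the official kubernetes Python client (cluster access via a local kubeconfig is assumed) could list pods that are not in the Running phase:

```python
# Hedged sketch: list non-running pods across all namespaces with the
# official kubernetes Python client. Assumes a kubeconfig is available.
from kubernetes import client, config

def non_running_pods():
    config.load_kube_config()  # use load_incluster_config() when run in-cluster
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase != "Running":
            problems.append(
                f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}"
            )
    return problems

if __name__ == "__main__":
    for line in non_running_pods():
        print(line)
```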
Work From Home
Startup background is preferred
Company Location: Noida
The AWS Cloud/DevOps Engineer will be working with the engineering team and focusing on AWS infrastructure and automation. A key part of the role is championing and leading infrastructure as code. The Engineer will work closely with the Manager of Operations and DevOps to build, manage, and automate our AWS infrastructure.
Duties & Responsibilities:
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure and maintain AWS cloud infrastructure defined as code
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization (a minimal Cost Explorer sketch follows this list)
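For the cost-analysis duty above, a hedged boto3 Cost Explorer sketch (the date range and grouping are assumptions, and Cost Explorer must be enabled on the account) might look like this:

```python
# Illustrative sketch: monthly unblended cost grouped by AWS service.
# Date range is an assumption; Cost Explorer must be enabled on the account.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service}: ${amount:.2f}")
```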
Qualifications:
- 1-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
- Strong understanding of how to secure AWS environments and meet compliance requirements
- Expertise using Chef for configuration management
- Hands-on experience deploying and managing infrastructure with Terraform
- Solid foundation of networking and Linux administration
- Experience with CI-CD, Docker, GitLab, Jenkins, ELK and deploying applications on AWS
- Ability to learn/use a wide variety of open source technologies and tools
- Strong bias for action and ownership
- 3+ years of experience leading a team of DevOps engineers
- 8+ years of experience managing DevOps for large engineering teams developing cloud-native software
- Strong in networking concepts
- In-depth knowledge of AWS and cloud architectures/services.
- Experience within the container and container orchestration space (Docker, Kubernetes)
- Passion for CI/CD pipeline using tools such as Jenkins etc.
- Familiarity with configuration management tools like Ansible, Terraform, etc.
- Proven record of measuring and improving DevOps metrics
- Familiarity with observability tools and experience setting them up
- Passion for building tools and productizing services that empower development teams.
- Excellent knowledge of Linux command-line tools and ability to write bash scripts.
- Strong in Unix / Linux administration and management
KEY ROLES/RESPONSIBILITIES:
- Own and manage the entire cloud infrastructure
- Create the entire CI/CD pipeline to build and release
- Explore new technologies and tools and recommend those that best fit the team and organization
- Own and manage the site reliability
- Strong decision-making skills and metric-driven approach
- Mentor and coach other team members
About Us

Zupee is India's fastest-growing innovator in real money gaming, with a focus on predominantly skill-focused games. Started by two IIT-Kanpur alumni in 2018, we are backed by marquee global investors such as WestCap Group, Tomales Bay Capital, Matrix Partners, Falcon Edge, Orios Ventures, and Smile Group.

Know more about our recent funding: https://bit.ly/3AHmSL3

Our focus has been on innovating in the board, strategy, and casual games sub-genres. We innovate to ensure our games provide an intersection between skill and entertainment, enabling our users to earn while they play.

Location: We are location agnostic & our teams work from anywhere. Physically we are based out of Gurugram.
Core Responsibilities:
- Handle all DevOps activities
- Engage with development teams to document and implement best practices for our existing and new products
- Ensure reliability of services by making systems and applications stable
- Implement DevOps technologies and processes, e.g. containerization, CI/CD, infrastructure as code, metrics, monitoring, etc.
- Good understanding of Terraform (or similar "Infrastructure as Code" technologies)
- Good understanding of modern CI/CD methods and approaches
- Troubleshoot and resolve issues related to application development, deployment, and operations
- Expertise in the AWS ecosystem (EC2, ECS, S3, VPC, ALB, SG)
- Experience in monitoring production systems and being proactive to ensure that outages are limited
- Develop and contribute to several infrastructure improvement projects around performance, security, autoscaling, and cost-optimization
What are we looking for:
- AWS cloud services: Compute, Networking, Security, and Database services
- Strong Linux fundamentals
- Experience with Docker & Kubernetes in a production environment
- Working knowledge of Terraform/Ansible
- Working knowledge of Prometheus and Grafana or any other monitoring system (a minimal Prometheus exporter sketch follows this list)
- Good at scripting with Bash/Python/Shell
- Experience with CI/CD pipelines, preferably AWS DevOps
- Strong troubleshooting and problem-solving skills
- A top-notch DevOps Engineer will demonstrate excellent leadership skills and the capacity to mentor subordinates
- Good communication and collaboration skills
- Openness to learning new tools and technologies
- Startup experience would be a plus
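As a hedged illustration of the Prometheus/Grafana point above (the metric names, port, and simulated workload are assumptions), a service can expose custom metrics for Prometheus to scrape with the prometheus_client library:

```python
# Hedged sketch: expose custom application metrics on /metrics for Prometheus.
# Metric names, port, and the simulated workload are assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request() -> None:
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics at http://localhost:8000/metrics
    while True:
        handle_request()
```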
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on standard DevOps practice and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer, addressing next-gen data evolution challenges, and who are willing to use their experience in areas directly related to Infrastructure Services, Software as a Service, and Cloud Services to create a niche in the market.
Key Qualifications
- 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
- Experience in implementing continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc.
- Strong experience in Linux/Unix administration.
- Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
- Expertise in multiple coding and scripting languages, including Shell, Python, and Perl.
- Hands-on exposure to modern IT infrastructure (e.g. Docker Swarm/Mesos/Kubernetes/OpenStack).
- Exposure to relational database technologies (MySQL/Postgres/Oracle) or any NoSQL database.
- Worked on open-source tools for logging, monitoring, search engines, caching, etc.
- Professional certification in AWS or any other cloud is preferable.
- Excellent problem solving and troubleshooting skills.
- Must have good written and verbal communication skills.
Key Responsibilities
- Ambitious individuals who can work under their own direction towards agreed targets/goals.
- Must be flexible with office timings to accommodate multinational client time zones.
- Will be involved in solution design from the conceptual stages through the development cycle and deployment.
- Involved in development, operations, and supporting internal teams.
- Improve infrastructure uptime, performance, resilience, and reliability through automation.
- Willing to learn new technologies and work on research-oriented projects.
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
- Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
- Independent thinking, with the ability to work in a fast-paced environment with creativity and brainstorming.
www.banyandata.com
- Hands-on experience in the following is a must: Unix, Python, and Shell Scripting.
- Hands-on experience in creating infrastructure on the AWS cloud platform is a must.
- Must have experience in industry-standard CI/CD tools like Git/BitBucket, Jenkins, Maven, Artifactory, and Chef.
- Must be good at these DevOps tools:
        Version Control Tools: Git, CVS
        Build Tools: Maven and Gradle
        CI Tools: Jenkins
- Hands-on experience with analytics tools (ELK stack).
- Knowledge of Java will be an advantage.
- Experience designing and implementing an effective and efficient CI/CD flow that gets code from dev to prod with high quality and minimal manual effort.
- Ability to help debug and optimise code and automate routine tasks.
- Should have excellent communication skills
- Experience in dealing with difficult situations and making decisions with a sense of urgency.
- Experience in Agile and Jira will be an added advantage