29+ AWS CloudFormation Jobs in Bangalore (Bengaluru)
Apply to 29+ AWS CloudFormation job openings in Bangalore (Bengaluru) on CutShort.io. Explore the latest AWS CloudFormation opportunities across top companies like Google, Amazon & Adobe.
Apprication Pvt Ltd - Work from office - Goregaon (E)
DevOps Engineer
Experience: 1-4 years preferred
Automate the deployment process from Bitbucket to servers.
Implement autoscaling for servers.
Create containers and server images.
Possess comprehensive AWS server knowledge beyond just EC2 instances.
Utilize a broad range of AWS services beyond EC2, including but not limited to:
Elastic Load Balancing (ELB) and Amazon Route 53 for traffic management.
Amazon RDS and DynamoDB for database management.
AWS IAM for secure access management.
Optimize AWS infrastructure for security, scalability, and cost-effectiveness.
Use Jenkins or similar CI/CD tools to ensure seamless and reliable code deployments with minimal downtime.
Deploy and manage infrastructure using IaC tools such as Terraform, AWS CloudFormation, or Ansible (a minimal sketch follows this list).
Implement autoscaling solutions to automatically adjust server capacity based on demand, ensuring optimal performance and cost-efficiency.
Create, manage, and optimize container images using Docker.
Deploy and orchestrate containers using Kubernetes or other orchestration tools to ensure scalability.
Implement comprehensive monitoring solutions using tools like Prometheus, Grafana, the ELK Stack, etc.
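The IaC item above calls for deploying infrastructure with tools such as AWS CloudFormation. Purely as a hedged illustration (not part of the original posting), the sketch below creates or updates a CloudFormation stack with boto3; the stack name, template path and region are hypothetical placeholders.

```python
# Minimal sketch: create or update a CloudFormation stack with boto3.
# Stack name, template path and region are illustrative placeholders.
import boto3
from botocore.exceptions import ClientError

cfn = boto3.client("cloudformation", region_name="ap-south-1")

def deploy_stack(stack_name: str, template_path: str) -> None:
    with open(template_path) as f:
        template_body = f.read()
    try:
        cfn.create_stack(
            StackName=stack_name,
            TemplateBody=template_body,
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )
        waiter = cfn.get_waiter("stack_create_complete")
    except ClientError as exc:
        if "AlreadyExistsException" not in str(exc):
            raise
        # The stack already exists, so apply the template as an update instead.
        # (update_stack raises a ValidationError if there are no changes to apply.)
        cfn.update_stack(
            StackName=stack_name,
            TemplateBody=template_body,
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )
        waiter = cfn.get_waiter("stack_update_complete")
    waiter.wait(StackName=stack_name)

if __name__ == "__main__":
    deploy_stack("app-prod", "infra/app.yaml")  # hypothetical stack and template
```

In practice a flow like this would usually be wired into a Bitbucket or Jenkins pipeline step rather than run by hand.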
What You Can Expect from Us:
Here at Nomiso, we work hard to provide our team with the best opportunities to grow their careers. You can expect to be a pioneer of ideas, a student of innovation, and a leader of thought. Innovation and thought leadership are at the center of everything we do, at all levels of the company. Let’s make your career great!
Position Overview:
The Principal Cloud Network Engineer is a key interface to client teams and is responsible for developing convincing technical solutions. This requires working closely with clients and multiple partner-vendor teams to architect the solution.
This position requires sound technical knowledge, proven business acumen and a differentiating client-interfacing ability. You are required to anticipate, create, and define innovative solutions that match the customer's needs and the client's tactical and strategic requirements.
Roles and Responsibilities:
- Design and implement next-generation networking technologies
- Deploy/support large-scale production network
- Track, analyze, and trend capacity on the broadcast network and datacenter infrastructure
- Provide Tier 3 escalated network support
- Perform fault management and problem resolution
- Work closely with other departments, vendors, and service providers
- Perform network change management, support modifications, and maintenance
- Perform network upgrade, maintenance, and repair work
- Lead implementation of new systems
- Perform capacity planning and management
- Suggest opportunities for improvement
- Create and support network management objectives, policies, and procedures
- Ensure network documentation is kept up-to-date
- Train and assist junior engineers.
Must Have Skills:
Candidates with overall 10+ years of experience in the following:
- Hands-on: Routers/Switches, Firewalls (Palo Alto or similar), Load Balancers (LTM, GTM), AWS networking (VPC, API Gateway, CloudFront, Route 53, Cloud WAN, Direct Connect, PrivateLink, Transit Gateway), Wireless.
- Strong hands-on coding/scripting experience in one or more programming languages such as Python, Golang, Java, Bash, etc.
- Networking technologies: routing protocols (BGP, EIGRP, OSPF), VRFs, VLANs, VRRP, HSRP, LACP, MLAG, TACACS / RANCID / Git, IPSec VPN, DNS/DHCP, NAT/SNAT, IP Multicast, VPC, Transit Gateway, NAT Gateway, ALB/ELB, Security Groups, ACLs, SNMP.
- Managing hardware, IOS, coordinating with vendors/partners for support.
- Managing CDNs, links, VPN technologies, SDN/Cisco ACI (design and implementation) and Network Function Virtualization (NFV).
- Reviewing technology designs and architecture, taking local and regional regulatory requirements into account for Voice and Video solutions, Routing, Switching, VPN, LAN, WAN, Network Security, Firewalls, NGFW, NAT, IPS, Botnet protection, Application Control, DDoS mitigation and Web Filtering.
- Using Palo Alto Firewall / Panorama, BIG-IQ and NetBrain tools/technology standards for daily support, enhancing performance and improving reliability.
- Creating a real-time contextual living map of the client's network with detailed network specifications, including diagrams and equipment configurations with defined standards.
- Improving the reliability of the service and proactively identifying and preventing impact to customers by eliminating Single Points of Failure (SPOF).
- Capturing critical forensic data and providing complete visibility across the enterprise for security incidents as soon as a threat is detected, by implementing tools like NetBrain.
Good to Have Skills:
- Industry certifications on Switching, Routing and Security.
- Elastic Load Balancing (ELB), DNS/DHCP, IPSec VPN, Multicast, TACACS / RANCID / Git, ALB/ELB
- AWS Control Tower
- Experience leading a team of 5 or more.
- Strong Analytical and Problem Solving Skills.
- Experience implementing / maintaining Infrastructure as Code (IaC)
- Certifications: CCIE, AWS Certified Advanced Networking
- Java
- Spring Boot
- Database (preferably MySQL)
- Multithreading
- Low-level design (any module)
- GitHub
- LeetCode
- Data structures
Position: Java Developer
Experience: 3-8 Years
Location: Bengaluru
We are a multi-award-winning creative engineering company offering design and technology solutions on mobile, web and cloud platforms. We are looking for an enthusiastic and self-driven Java Developer to join our team.
Roles and Responsibilities:
- Expert level Micro Web Services development skills using Java/J2EE/Spring
- Strong in SQL and NoSQL databases (MySQL / MongoDB preferred); ability to develop software programs with the best design patterns, data structures & algorithms
- Work in a very challenging and high-performance environment to clearly understand requirements and provide state-of-the-art solutions (via design and code)
- Ability to debug complex applications and help in providing durable fixes
- While the Java platform is primary, ability to understand, debug and work on other application platforms using Ruby on Rails and Python
- Responsible for delivering feature changes and functional additions that handle millions of requests per day while adhering to quality and schedule targets
- Extensive knowledge of at least one cloud platform (AWS, Microsoft Azure, GCP), preferably AWS
- Strong unit testing skills for frontend and backend using any standard framework
- Exposure to application gateways and dockerized microservices
- Good knowledge and experience with Agile, TDD or BDD methodologies
Desired Profile:
- Programming language – Java
- Framework – Spring Boot
- Good Knowledge of SQL & NoSQL DB
- AWS Cloud Knowledge
- Micro Service Architecture
Good to Have:
- Familiarity with Web Front End (Java Script/React)
- Familiarity with working in Internet of Things / Hardware integration
- Docker & Kubernetes, Serverless Architecture
- Working experience in Energy Company (Solar Panels + Battery)
Confidential
About this role
We are seeking an experienced MongoDB Developer/DBA who will be responsible for maintaining MongoDB databases while optimizing the performance, security, and availability of MongoDB clusters. As a key member of our team, you’ll play a crucial role in ensuring our data infrastructure runs smoothly.
You'll have the following responsibilities
- Maintain and configure MongoDB instances: build, design, deploy, maintain, and lead the MongoDB Atlas infrastructure. Keep clear documentation of the database setup and architecture.
- Take ownership of governance, defining and enforcing policies in MongoDB Atlas. Provide consultancy in drawing up the design and infrastructure (MongoDB Atlas) for each use case.
- Put a service and governance wrap in place to restrict over-provisioning of server size, number of clusters per project and scaling through MongoDB Atlas.
- Gather and document detailed business requirements applicable to the data layer. Responsible for designing, configuring and managing MongoDB on Atlas.
- Design, develop, test, document, and deploy high-quality technical solutions on the MongoDB Atlas platform based on industry best practices to solve business needs.
- Resolve technical issues raised by the team and/or customer and manage escalations as required.
- Migrate data from on-premise MongoDB and RDBMS to MongoDB Atlas (a minimal migration sketch follows this list). Communicate and collaborate with other technical resources and customers, providing timely updates on the status of deliverables, shedding light on technical issues, and obtaining buy-in on creative solutions.
- Write procedures for backup and disaster recovery.
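The migration bullet above refers to moving data from on-premise MongoDB into Atlas. The sketch below is only a rough PyMongo illustration of that idea; connection strings, database and collection names are hypothetical, and a real migration would normally use mongodump/mongorestore, mongomirror or Atlas Live Migration.

```python
# Sketch: copy one collection from an on-premise MongoDB into MongoDB Atlas.
# Connection strings, database and collection names are placeholders.
from pymongo import MongoClient

source = MongoClient("mongodb://onprem-host:27017")
target = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net")

src_coll = source["orders_db"]["orders"]
dst_coll = target["orders_db"]["orders"]

batch = []
for doc in src_coll.find({}):            # stream documents from the source
    batch.append(doc)
    if len(batch) == 1000:               # insert in batches to limit memory use
        dst_coll.insert_many(batch, ordered=False)
        batch.clear()
if batch:
    dst_coll.insert_many(batch, ordered=False)

print("migrated", dst_coll.estimated_document_count(), "documents")
```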
You'll have the following skills & experience
- Excellent analytical, diagnostic and problem-solving skills
- Strong understanding of database concepts and expertise in designing and developing NoSQL databases such as MongoDB
- MongoDB query operations, import and export operations in the database
- Experience in ETL methodology for performing data migration, extraction, transformation, data profiling and loading
- Migrating databases by ETL, migrating databases by manual processes, and handling design, development and implementation
- General networking skills, especially in the context of a public cloud (e.g. AWS: VPC, subnets, routing tables, NAT / internet gateways, DNS, security groups)
- Experience using Terraform as an IaC tool for setting up infrastructure on AWS Cloud
- Performing database backups and recovery
- Competence in at least one of the following languages (in no particular order): Java, C++, C#, Python, Node.js (JavaScript), Ruby, Perl, Scala, Go
- Excellent communication skills, often being able to compromise but draw out risks and constraints associated with solutions; able to work independently and collaborate with other teams
- Proficiency in configuring schemas and MongoDB data modeling
- Strong understanding of SQL and NoSQL databases
- Comfortable with MongoDB syntax
- Experience with database security management
- Performance optimization: ensure databases achieve maximum performance and availability, and design effective indexing strategies (a brief indexing sketch follows this list)
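As a small, hedged illustration of the indexing and query work mentioned in the last bullet (not from the original posting), here is a PyMongo sketch; the database, collection and field names are invented.

```python
# Sketch: create a compound index and confirm a query uses it via explain().
# Database, collection and field names are illustrative placeholders.
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Index that supports "orders for one customer, newest first"
orders.create_index([("customer_id", ASCENDING), ("created_at", DESCENDING)])

for doc in orders.find({"customer_id": 42}).sort("created_at", -1).limit(10):
    print(doc["_id"])

plan = orders.find({"customer_id": 42}).sort("created_at", -1).explain()
print(plan["queryPlanner"]["winningPlan"])  # expect an IXSCAN rather than a COLLSCAN
```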
We are Seeking:
1. AWS Serverless, AWS CDK:
Proficiency in developing serverless applications using AWS Lambda, API Gateway, S3, and other relevant AWS services.
Experience with AWS CDK for defining and deploying cloud infrastructure.
Knowledge of serverless design patterns and best practices.
Understanding of Infrastructure as Code (IaC) concepts.
Experience in CI/CD workflows with AWS CodePipeline and CodeBuild.
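To illustrate the AWS CDK point above in a hedged way (CDK supports Python alongside TypeScript), here is a minimal sketch of a stack wiring API Gateway to a Lambda function; the construct IDs and the "lambda" asset directory are assumptions, not anything from the posting.

```python
# Sketch of an AWS CDK v2 (Python) stack: API Gateway proxying to a Lambda function.
# Construct IDs and the "lambda" asset directory are illustrative placeholders.
from aws_cdk import App, Stack, aws_lambda as _lambda, aws_apigateway as apigw
from constructs import Construct

class ServerlessApiStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        handler = _lambda.Function(
            self, "ApiHandler",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # directory containing index.py
        )
        # Single REST API that proxies all routes to the function
        apigw.LambdaRestApi(self, "Api", handler=handler)

app = App()
ServerlessApiStack(app, "ServerlessApiStack")
app.synth()
```

Running `cdk deploy` synthesizes this into a CloudFormation template, which is the Infrastructure as Code connection the section makes.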
2. TypeScript, React/Angular:
Proficiency in TypeScript.
Experience in developing single-page applications (SPAs) using React.js or Angular.
Knowledge of state management libraries like Redux (for React) or RxJS (for Angular).
Understanding of component-based architecture and modern frontend development practices.
3. Node.js:
Strong proficiency in backend development using Node.js.
Understanding of asynchronous programming and event-driven architecture.
Familiarity with RESTful API development and integration.
4. MongoDB/NoSQL:
Experience with NoSQL databases and their use cases.
Familiarity with data modeling and indexing strategies in NoSQL databases.
Ability to integrate NoSQL databases into serverless architectures.
5. CI/CD:
Ability to troubleshoot and debug CI/CD pipelines.
Knowledge of automated testing practices and tools.
Understanding of deployment automation and release management processes.
Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field.
Certification (Preferred / Added Advantage): AWS certifications (e.g., AWS Certified Developer - Associate)
Company Description
KogniVera is an India-based technology consulting and services company that specializes in conceptualization, design, engineering, and management of digital products. The company brings rich experience and expertise to address the growth needs of enterprises in dynamic industries such as Retail, Financial Services, Insurance, and Healthcare. KogniVera has an unwavering obsession with customer success and a partnership mindset dedicated to achieving unparalleled success in the digital landscape.
Role Description
This is a full-time on-site Java Spring Boot Lead role located in Bangalore. The Java Spring Boot Lead will also collaborate with cross-functional teams and stakeholders to identify, design, and implement new features and functionality.
Full-time role
Location: Bengaluru, India (Onsite)
Skill set required: 5+ years of experience.
Very strong in core Java.
Sound knowledge of Spring Boot.
Hands-on experience in creating frameworks.
Cloud knowledge.
Good understanding of design patterns.
Must have worked on at least 2-3 projects.
Please share your updated resume and the details.
mgarg@kognivera.com
Website : https://kognivera.com
Job Summary
The Cloud Production Support Engineer (PSE) is responsible for fulfilling day-to-day infrastructure and service requests from application teams across AWS, CI/CD solutions and observability tools. You will be expected to handle production issues in collaboration with the cloud infrastructure and application teams.
Responsibilities and Duties
- Troubleshoot production Issues: When technical issues with the cloud infrastructure components arise, PSE must act quickly to analyse the available data and find the root cause of the problem. They may then develop a solution or escalate the problem to other engineering team members while providing stakeholders with progress updates.
- Infrastructure provisioning and modification: Application teams may request to create new infrastructure or modify the existing ones in AWS based on their requirements via the ticketing tool. PSE should ensure that the required data/info is available on the ticket and provide a resolution based on the given SLA.
- Alert Management: Alerts from the observability tools will be received on multiple channels according to the notification settings. PSEs are expected to acknowledge the alerts, troubleshoot the issue, close the alert based on the given SLA, or escalate to the cloud infra/DevOps team for further diagnosis.
- Onboarding, Off-boarding and access management: Whenever an employee joins or leaves the organization, you will receive an onboarding or offboarding request.
- Prepare Technical Documentation: PSEs must prepare documentation when logging product issues, as they must note all details, including their observations, diagnoses, and action steps. Other everyday tasks include weekly reports summarising production performance, upgrade release notes, and troubleshooting guides.
- Product Improvements: Since PSEs have good exposure to the product issues, they should work closely with the PMs+EMs, pass the feedback on the product, and get the improvements/fixes included in the product roadmap.
- Adherence to SLA and timelines: PSEs should always adhere to the timelines shared with other teams for the closure of fixes and deliver outcomes as per the SLA guidance agreed with business teams.
- Reporting: Report and track weekly SLA metrics and the tickets worked, closed or transferred by PSEs. Identify and devise how productivity can be captured at the individual level and report it monthly.
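As a rough illustration of the alert-management duty above (not taken from the posting), a PSE might start triage with a script like the following boto3 sketch, which lists CloudWatch alarms currently in the ALARM state; the region is a placeholder.

```python
# Sketch: list CloudWatch alarms currently firing so they can be triaged against the SLA.
# The region is an illustrative placeholder.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

paginator = cloudwatch.get_paginator("describe_alarms")
for page in paginator.paginate(StateValue="ALARM"):
    for alarm in page["MetricAlarms"]:
        print(alarm["AlarmName"], "-", alarm["StateReason"], alarm["StateUpdatedTimestamp"])
```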
Qualifications and Skills
- Degree in Computer Science/Information Technology.
- Two years or more experience in Cloud and system administration.
- Experience troubleshooting in complex environments using monitoring tools.
- Demonstrated experience with containerisation technologies (Docker, Kubernetes, etc.)
- Hands-on experience with the most common AWS services.
- Experience implementing, maintaining, monitoring and supporting IT infrastructure; the candidate should preferably have balanced knowledge and experience across services and software development.
- Hands-on experience with Azure and AWS cloud hosting and deployments; must know cloud computing concepts in Azure and AWS.
- Experience with Microsoft SQL Server and MySQL for deployments and DB maintenance (including taking DB backups and scheduling jobs).
- Hands-on experience with Linux, Jenkins (deployment automation) and GitHub (repository) is a must.
- Programming skills in .NET and SQL Server are required, along with good knowledge of DevOps; the candidate should contribute to improving monitoring logic, deployments and builds.
• 3+ Years of experience as a Go Developer
• Experience in ReactJS (most preferred), AngularJS or similar front-end frameworks
• Experience with Python and/or Golang (preferably both), SQL, and design/architectural patterns
• Experience in Java, .NET or other open-source technologies is an added advantage
• Hands-on experience on SQL, Query optimization, DB server migration
• Preferably experience in PostgreSQL or MySQL
• Knowledge of NOSQL databases will be an added advantage
• Experience in Cloud platforms like AWS, Azure with knowledge of containerization, Kubernetes is an
added advantage
• Knowledge of one or more programming languages along with HTML5/CSS3, Bootstrap
• Familiarity with architecture styles/APIs (REST, RPC)
• Understanding of Agile methodologies
• Experience with Threading, Multithreading and pipelines
• Experience in creating RESTful APIs with Golang, Python or Java, handling JSON and XML (a minimal Python sketch follows this list)
• Experience with GitHub and Tortoise SVN version control
• Strong attention to detail
• Strong knowledge of asynchronous programming with the latest frameworks
• Excellent troubleshooting and communication skills
• Strong knowledge of unit testing frameworks
• Proven knowledge of ORM techniques
• Skill for writing reusable libraries
• Understanding of fundamental design principles for building a scalable application
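The list mentions building RESTful APIs in Golang, Python or Java that serve JSON. Purely as a hedged Python illustration (the framework choice, routes and in-memory data are assumptions), a minimal Flask service could look like this:

```python
# Minimal Flask sketch of a JSON REST API; routes and in-memory data are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS = {1: {"id": 1, "name": "sample"}}

@app.get("/items/<int:item_id>")
def get_item(item_id: int):
    item = ITEMS.get(item_id)
    return (jsonify(item), 200) if item else (jsonify(error="not found"), 404)

@app.post("/items")
def create_item():
    payload = request.get_json(force=True)
    item_id = max(ITEMS, default=0) + 1
    ITEMS[item_id] = {"id": item_id, **payload}
    return jsonify(ITEMS[item_id]), 201

if __name__ == "__main__":
    app.run(port=8080)
```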
Job Responsibilities:
Section 1 -
- Responsible for managing and providing L1 support to build, design, deploy and maintain Cloud solutions on AWS.
- Implement, deploy and maintain development, staging & production environments on AWS.
- Familiar with serverless architecture and services on AWS like Lambda, Fargate, EBS, Glue, etc.
- Understanding of Infrastructure as Code and familiarity with related tools like Terraform, Ansible, CloudFormation, etc.
Section 2 -
- Managing the Windows and Linux machines, Kubernetes, Git, etc.
- Responsible for L1 management of Servers, Networks, Containers, Storage, and Databases services on AWS.
Section 3 -
- Timely monitoring of production workload alerts and quickly addressing the issues
- Responsible for monitoring and maintaining the Backup and DR process.
Section 4 -
- Responsible for documenting the process.
- Responsible for leading cloud implementation projects with end-to-end execution.
Qualifications: Bachelor of Engineering / MCA, preferably with an AWS/Cloud certification
Skills & Competencies
- Linux and Windows servers management and troubleshooting.
- AWS services experience with CloudFormation, EC2, RDS, VPC, EKS, ECS, Redshift, Glue, etc.
- Kubernetes and containers knowledge
- Understanding of setting up AWS messaging, streaming and queuing services (MSK, Kinesis, SQS, SNS, MQ); a minimal SQS sketch follows this list
- Understanding of serverless architecture concepts
- High understanding of networking concepts
- Managing monitoring and alerting systems
- Sound knowledge of database concepts like data warehouses, data lakes and ETL jobs
- Good Project management skills
- Documentation skills
- Backup and DR understanding
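The messaging bullet above lists SQS among the queuing services; the following boto3 sketch (an illustration only; queue name and region are placeholders) shows the basic send/receive/delete cycle.

```python
# Sketch: basic SQS send/receive/delete cycle. Queue name and region are placeholders.
import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]

sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 123}')

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    print("received:", msg["Body"])
    # Delete after successful processing so the message is not redelivered
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```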
Soft Skills - Project management, Process Documentation
Ideal Candidate:
- AWS certification, with 2-4 years of project execution experience.
- Someone who is interested in building sustainable cloud architecture with automation on AWS.
- Someone who is interested in learning and being challenged on a day-to-day basis.
- Someone who can take ownership of the tasks and is willing to take the necessary action to get it done.
- Someone who is curious to analyze and solve complex problems.
- Someone who is honest with their quality of work and is comfortable with taking ownership of their success and failure, both.
Behavioral Traits
- We are looking for someone who is interested to be part of creativity and the innovation-based environment with other team members.
- We are looking for someone who understands the idea/importance of teamwork and individual ownership at the same time.
- We are looking for someone who can debate logically, respectfully disagree, admit if proven wrong, learn from their mistakes, and grow quickly.
Introduction
Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, and they shouldn't have to rely on cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator and other investors from India, the US and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
We are looking for an experienced MLOps Engineer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be a key member of a team in decision making, implementations, development and advancement of ML operations of the core AI platform.
Roles and Responsibilities:
- Work closely with a cross-functional team to serve business goals and objectives.
- Develop, implement and manage MLOps in cloud infrastructure for data preparation, deployment, monitoring and retraining of models
- Design and build application containerisation and orchestration with Docker and Kubernetes on the AWS platform.
- Build and maintain code, tools and packages in the cloud
Requirements:
- At least 2+ years of experience in data engineering
- At least 3+ years of experience in Python, with familiarity with popular ML libraries
- At least 2+ years of experience in model serving and pipelines
- Working knowledge of containers and orchestration (Kubernetes, Docker) on AWS
- Experience designing distributed systems deployments at scale
- Hands-on experience in coding and scripting
- Ability to write effective, scalable and modular code
- Familiarity with Git workflows, CI/CD and NoSQL (MongoDB)
- Familiarity with Airflow, DVC and MLflow is a plus (a minimal MLflow sketch follows this list)
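Since the last requirement names MLflow as a plus, here is a small, hedged sketch of experiment tracking with it; the tracking URI, experiment name, parameters and metric values are all invented for illustration.

```python
# Sketch: log parameters, a metric and an artifact for one training run in MLflow.
# Tracking URI, experiment name and values are illustrative placeholders.
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("radiology-report-model")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.001)
    mlflow.log_param("epochs", 20)
    mlflow.log_metric("val_accuracy", 0.93)
    mlflow.log_artifact("model_card.md")  # any local file worth attaching to the run
```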
Why you should join us
- You will join the mission to create a positive impact on millions of people's lives
- You get to work on the latest technologies in a culture that encourages experimentation
- You get to work with superhumans
- You get to work in an accelerated learning environment
What you will do
- You will provide deep technical expertise to your team in building future ready systems.
- You will help develop a robust roadmap for ensuring operational excellence
- You will setup infrastructure on AWS that will be represented as code
- You will work on several automation projects that provide great developer experience
- You will setup secure, fault tolerant, reliable and performant systems
- You will establish clean and optimised coding standards for your team that are well documented
- You will set up systems in a way that are easy to maintain and provide a great developer experience
- You will actively mentor and participate in knowledge sharing forums
- You will work in an exciting startup environment where you can be ambitious and try new things :)
You should apply if
- You have a strong foundation in Computer Science concepts and programming fundamentals
- You have been working on cloud infrastructure setup, especially on AWS, for 8+ years
- You have set up and maintained reliable systems that operate at high scale
- You have experience in hardening and securing cloud infrastructures
- You have a solid understanding of computer networking, network security and CDNs
- Extensive experience in AWS, Kubernetes and optionally Terraform
- Experience in building automation tools for code build and deployment (preferably in JS)
- You understand the hustle of a startup and are good with handling ambiguity
- You are curious, a quick learner and someone who loves to experiment
- You insist on highest standards of quality, maintainability and performance
- You work well in a team to enhance your impact
- Hands-on experience building database-backed web applications using Python based frameworks
- Excellent knowledge of Linux and experience developing Python applications that are deployed in Linux environments
- Experience building client-side and server-side API-level integrations in Python
- Experience in containerization and container orchestration systems like Docker, Kubernetes, etc.
- Experience with NoSQL document stores like the Elastic Stack (Elasticsearch, Logstash, Kibana)
- Experience in using and managing Git based version control systems - Azure DevOps, GitHub, Bitbucket etc.
- Experience in using project management tools like Jira, Azure DevOps etc.
- Expertise in Cloud based development and deployment using cloud providers like AWS or Azure
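The Elastic Stack item above involves indexing and querying documents; as a hedged illustration (8.x-style keyword arguments for the official Python client, with an invented index name and cluster URL), a basic index-and-search flow looks like this:

```python
# Sketch: index and search a document with the Elasticsearch Python client (8.x style).
# The cluster URL, index name and document are illustrative placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(index="app-logs", document={"level": "ERROR", "message": "timeout calling payments"})
es.indices.refresh(index="app-logs")  # make the document searchable immediately

hits = es.search(index="app-logs", query={"match": {"level": "ERROR"}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["message"])
```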
Leading IT MNC Company
Roles & Responsibilities:
- Design, implement and maintain all AWS infrastructure and services within a managed service environment
- Should be able to work 24x7 shifts to support the infrastructure.
- Design, Deploy and maintain enterprise class security, network and systems management applications within an AWS environment
- Design and implement availability, scalability, and performance plans for the AWS managed service environment
- Continual re-evaluation of existing stack and infrastructure to maintain optimal performance, availability and security
- Manage the production deployment and deployment automation
- Implement process and quality improvements through task automation
- Institute infrastructure as code, security automation and automation of routine maintenance tasks
- Experience with containerization and orchestration tools like Docker and Kubernetes
- Build, deploy and manage Kubernetes clusters through automation
- Create and deliver knowledge sharing presentations and documentation for support teams
- Learning on the job and explore new technologies with little supervision
- Work effectively with onsite/offshore teams
Qualifications:
- Must have Bachelor's degree in Computer Science or related field and 4+ years of experience in IT
- Experience in designing, implementing, and maintaining all AWS infrastructure and services
- Design and implement availability, scalability, and performance plans for the AWS managed service environment
- Continual re-evaluation of existing stack and infrastructure to maintain optimal performance, availability, and security
- Hands-on technical expertise in Security Architecture, automation, integration, and deployment
- Familiarity with compliance & security standards across the enterprise IT landscape
- Extensive experience with Kubernetes and AWS (IAM, Route53, SSM, S3, EFS, EBS, ELB, Lambda, CloudWatch, CloudTrail, SQS, SNS, RDS, CloudFormation, DynamoDB)
- Solid understanding of AWS IAM Roles and Policies
- Solid Linux experience with a focus on web (Apache Tomcat/Nginx)
- Experience with automation/configuration management using Terraform, Chef, Ansible or similar.
- Understanding of protocols/technologies like Microservices, HTTP/HTTPS, SSL/TLS, LDAP, JDBC, SQL, HTML
- Experience in managing and working with the offshore teams
- Familiarity with CI/CD systems such as Jenkins, GitLab CI
- Scripting experience (Python, Bash, etc.)
- AWS, Kubernetes Certification is preferred
- Ability to work with and influence Engineering teams
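The qualifications above combine AWS expertise with Python scripting; as one hedged example of that kind of routine automation (the region and the required tag key are assumptions), a script might flag EC2 instances missing an Owner tag:

```python
# Sketch: report EC2 instances that lack an "Owner" tag.
# The region and the required tag key are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "Owner" not in tags:
                print("untagged instance:", instance["InstanceId"])
```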
AI-powered cloud-based SaaS solution provider
● Able to contribute to the gathering of functional requirements, developing technical
specifications, and test case planning
● Demonstrating technical expertise, and solving challenging programming and design
problems
● 60% hands-on coding with architecture ownership of one or more products
● Ability to articulate architectural and design options, and educate development teams and
business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
patches
● Mentor and guide team members
● Work cross-functionally with various Bidgely teams including product management, QA/QE, various product lines, and/or business units to drive forward results
Requirements
● BS/MS in computer science or equivalent work experience
● 8-12 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with Big Data ecosystems
● Past experience with Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, Zookeeper
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python
● Ability to lead and mentor technical team members
● Expertise with the entire Software Development Life Cycle (SDLC)
● Excellent communication skills: Demonstrated ability to explain complex technical issues to
both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Business Acumen - strategic thinking & strategy development
● Experience on Cloud or AWS is preferable
● Have a good understanding and ability to develop software, prototypes, or proofs of
concepts (POC's) for various Data Engineering requirements.
● Experience with Agile Development, SCRUM, or Extreme Programming methodologies
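For the big data stack listed above, a typical task is an aggregation job in Spark. The PySpark sketch below is an illustration only; the S3 paths and column names are invented, and reading from S3 assumes the appropriate Hadoop AWS connectors are available.

```python
# Sketch: a small PySpark aggregation over event data in S3.
# Paths, column names and the app name are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-usage-rollup").getOrCreate()

events = spark.read.json("s3a://example-bucket/events/2024-01-01/")
rollup = (
    events.groupBy("household_id")
          .agg(F.sum("kwh").alias("total_kwh"), F.count("*").alias("events"))
)
rollup.write.mode("overwrite").parquet("s3a://example-bucket/rollups/2024-01-01/")
```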
● Able to contribute to the gathering of functional requirements, developing technical
specifications, and project & test planning
● Demonstrating technical expertise, and solving challenging programming and design
problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate
architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
patches
● Work cross-functionally with various Bidgely teams including product management,
QA/QE, various product lines, and/or business units to drive forward results
Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with Big Data ecosystems: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, Zookeeper
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python
● Strong leadership experience: Leading meetings, presenting if required
● Excellent communication skills: Demonstrated ability to explain complex technical
issues to both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience on Cloud or AWS is preferable
● Have a good understanding and ability to develop software, prototypes, or proofs of
concepts (POC's) for various Data Engineering requirements.
A global business process management company
Designation – Deputy Manager - TS
Job Description
- Total of 8-9 years of development experience in Data Engineering (B1/BII role).
- Minimum of 4-5 years in AWS data integrations, with very good data modelling skills.
- Should be very proficient in end-to-end AWS data solution design, which includes not only strong data ingestion and integration skills (both data at rest and data in motion) but also complete DevOps knowledge.
- Should have experience delivering at least 4 Data Warehouse or Data Lake solutions on AWS.
- Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc. (a minimal orchestration sketch follows this list).
- Strong Python skills.
- Should be an expert in cloud design principles, performance tuning and cost modelling. AWS certifications will be an added advantage.
- Should be a team player with excellent communication, able to manage their work independently with minimal or no supervision.
- A Life Science & Healthcare domain background will be a plus.
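The Glue and Step Functions experience called out above usually shows up as orchestration code. The boto3 sketch below is an illustration only, with an invented job name, state machine ARN, account ID and arguments.

```python
# Sketch: start an AWS Glue job run and a Step Functions execution from Python.
# Job name, state machine ARN, account ID and arguments are illustrative placeholders.
import json
import boto3

glue = boto3.client("glue", region_name="ap-south-1")
sfn = boto3.client("stepfunctions", region_name="ap-south-1")

run = glue.start_job_run(
    JobName="ingest-claims-data",
    Arguments={"--ingest_date": "2024-01-01"},
)
print("glue run id:", run["JobRunId"])

sfn.start_execution(
    stateMachineArn="arn:aws:states:ap-south-1:123456789012:stateMachine:etl-orchestrator",
    input=json.dumps({"ingest_date": "2024-01-01"}),
)
```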
Qualifications
BE/BTech/ME/MTech
Job Summary
Creates, modifies, and maintains software applications individually or as part of a team. Provides technical leadership on a team, including training and mentoring of other team members. Provides technology and architecture direction for the team, department, and organization.
Essential Duties & Responsibilities
- Develops software applications and supporting infrastructure using established coding standards and methodologies
- Sets example for software quality through multiple levels of automated tests, including but not limited to unit, API, End to End, and load.
- Self-starter and self-organized - able to work without supervision
- Develops tooling, test harnesses and innovative solutions to understand and monitor the quality of the product
- Develops infrastructure as code to reliably deploy applications on demand or through automation
- Understands cloud managed services and builds scalable and secure applications using them
- Creates proof of concepts for new ideas that answer key questions of feasibility, desirability, and viability
- Work with other technical leaders to establish coding standards, development best practices and technology direction
- Performs thorough code reviews that promote better understanding throughout the team
- Work with architects, designers, business analysts and others to design and implement high quality software solutions
- Builds intuitive user interfaces with the end user persona in mind using front end frameworks and styling
- Assist product owners in backlog grooming, story breakdown and story estimation
- Collaborate and communicate effectively with team members and other stakeholders throughout the organization
- Document software changes for use by other engineers, quality assurance and documentation specialists
- Master the technologies, languages, and practices used by the team and project assigned
- Train others in the technologies, languages, and practices used by the team
- Troubleshoot, instrument and debug existing software, resolving root causes of defective behavior
- Guide the team in setting up the infrastructure in the cloud.
- Setup the security protocols for the cloud infrastructure
- Works with the team in setting up the data hub in the cloud
- Create dashboards for the visibility of the various interactions between the cloud services
- Other duties as assigned
Experience
Education
- BA/BS in Computer Science, a related field or equivalent work experience
Minimum Qualifications
- Mastered advanced programming concepts, including object oriented programming
- Mastered technologies and tools utilized by team and project assigned
- Able to train others on general programming concepts and specific technologies
- Minimum 8 years’ experience developing software applications
Skills/Knowledge
- Must be expert in advanced programming skills and database technology
- Must be expert in at least one technology and/or language and proficient in multiple technologies and languages:
- (Specific languages needed will vary based on development department or project)
- .Net Core, C#, Java, SQL, JavaScript, Typescript, Python
- Additional desired skills:
- Single-Page Applications, Angular (v9), Ivy, RXJS, NGRX, HTML5, CSS/SASS, Web Components, Atomic Design
- Test First approach, Test Driven Development (TDD), Automated testing (Protractor, Jasmine), Newman Postman, artillery.io
- Microservices, Terraform, Jenkins, Jupyter Notebook, Docker, NPM, Yarn, Nuget, NodeJS, Git/Gerrit, LaunchDarkly
- Amazon Web Services (AWS), Lambda, S3, Cognito, Step Functions, SQS, IAM, Cloudwatch, Elasticache
- Database Design, Optimization, Replication, Partitioning/Sharding, NoSQL, PostgreSQL, MongoDB, DynamoDB, Elastic Search, PySpark, Kafka
- Agile, Scrum, Kanban, DevSecOps
- Strong problem-solving skills
- Outstanding communications and interpersonal skills
- Strong organizational skills and ability to multi-task
- Ability to track software issues to successful resolution
- Ability to work in a collaborative fast paced environment
- Setting up complex AWS data storage hub
- Well versed in setting up infrastructure security in the interactions between the planned components
- Experienced in setting up dashboards for analyzing the various operations in the AWS infra setup.
- Ability to learn new development language quickly and apply that knowledge effectively
Requirement:
Qualification: BE / B.Tech (CS/EEE/ECE/IT)
• You hold a bachelor's or master's degree in software engineering/IT.
• Ideally, you have initial professional experience of building production-quality cloud solutions.
• Familiar with AWS, AWS Lambda, AWS CloudFormation, AWS CloudWatch and AWS IoT Greengrass.
• You have experience with NoSQL databases and designing REST APIs. The following languages are used: Python, Node.js, JavaScript.
• Proficient in data structures and algorithms.
• A team player who likes to share with and learn from other colleagues.
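Tying together the Lambda, REST API and NoSQL items above, a hedged sketch of a Python Lambda handler behind API Gateway reading from DynamoDB might look like this; the table name, partition key and path parameter are assumptions.

```python
# Sketch: Python Lambda handler (API Gateway proxy event) reading one item from DynamoDB.
# Table name, partition key and path parameter are illustrative placeholders.
import json
import boto3

table = boto3.resource("dynamodb").Table("devices")

def handler(event, context):
    device_id = (event.get("pathParameters") or {}).get("deviceId")
    if not device_id:
        return {"statusCode": 400, "body": json.dumps({"error": "deviceId required"})}

    item = table.get_item(Key={"deviceId": device_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```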
As a Scala Developer, you are part of the development of the core applications using the Micro Service paradigm. You will join an Agile team, working closely with our product owner, building and delivering a set of Services as part of our order management and fulfilment platform. We deliver value to our business with every release, meaning that you will immediately be able to contribute and make a positive impact.
Our approach to technology is to use the right tool for the job and, through good software engineering practices such as TDD and CI/CD, to build high-quality solutions that are built with a view to maintenance.
Requirements
The Role:
- Build high-quality applications and HTTP based services.
- Work closely with technical and non-technical colleagues to ensure the services we build meet the needs of the business.
- Support development of a good understanding of business requirements and corresponding technical specifications.
- Actively contribute to planning, estimation and implementation of team work.
- Participate in code review and mentoring processes.
- Identify and plan improvements to our services and systems.
- Monitor and support production services and systems.
- Keep up with industry trends and new tools, technologies & development methods with a view to adopting best practices that fit the team and promote adoption more widely.
Relevant Skills & Experience:
The following skills and experience are relevant to the role and we are looking for someone who can hit the ground running in these areas.
- Web service application development in Scala (essential)
- Functional Programming (essential)
- API development and microservice architecture (essential)
- Patterns for building scalable, performant, distributed systems (essential)
- Databases – we use PostgreSQL (essential)
- Common libraries – we use Play, Cats and Slick (essential)
- Strong communication and collaboration skills (essential)
- Performance profiling and analysis of JVM based applications
- Messaging frameworks and patterns
- Testing frameworks and tools
- Docker, virtualisation and cloud computing – we use AWS and Vmware
- Javascript including common frameworks such as React, Angular, etc
- Linux systems administration
- Configuration tooling such as Puppet and Ansible
- Continuous delivery tools and environments
- Agile software delivery
- Troubleshooting and diagnosing complex production issues
Benefits
- Fun, happy and politics-free work culture built on the principles of lean and self organisation.
- Work with large scale systems powering global businesses.
- Competitive salary and benefits.
Note: We are looking for immediate joiners; we expect the offered candidate to join within 15 days. Buyout reimbursement is available for applicants with a 30 to 60 day notice period who are ready to join within 15 days.
To build on our success, we are looking for smart, conscientious software developers who want to work in a friendly, engaging environment and take our platform and products forward. In return, you will have the opportunity to work with the latest technologies, frameworks & methodologies in service development in an environment where we value collaboration and learning opportunities.
Job Title: Senior Cloud Infrastructure Engineer (AWS)
Department & Team: Technology
Location: India / UK / Ukraine
Reporting To: Infrastructure Services Manager
Role Purpose:
The purpose of the role is to ensure high systems availability across a multi-cloud environment, enabling the business to continue meeting its objectives.
This role will be mostly AWS / Linux focused but will include a requirement to understand comparative solutions in Azure.
Desire to maintain full hands-on status but to add Team Lead responsibilities in future
Client’s cloud strategy is based around a dual vendor solutioning model, utilising AWS and Azure services. This enables us to access more technologies and helps mitigate risks across our infrastructure.
The Infrastructure Services Team is responsible for the delivery and support of all infrastructure used by Client twenty-four hours a day, seven days a week. The team’s primary function is to install, maintain, and implement all infrastructure-based systems, both On Premise and Cloud Hosted. The Infrastructure Services group already consists of three teams:
1. Network Services Team – Responsible for the IP network and its associated components
2. Platform Services Team – Responsible for server and storage systems
3. Database Services Team – Responsible for all databases
This role will report directly to the Infrastructure Services Manager and will have responsibility for the day-to-day running of the multi-cloud environment, as well as playing a key part in designing best-practice solutions. It will enable the Client business to achieve its stated objectives by playing a key role in the Infrastructure Services Team to achieve world-class benchmarks of customer service and support.
Responsibilities:
Operations
· Deliver end-to-end technical and user support across all platforms (on-premise, Azure, AWS)
· Day-to-day, fully hands-on OS management responsibilities (Windows and Linux operating systems)
· Ensure robust server patching schedules are in place and meticulously followed to help reduce security-related incidents.
· Contribute to continuous improvement efforts around cost optimisation, security enhancement, performance optimisation, operational efficiency and innovation.
· Take an ownership role in delivering technical projects, ensuring best-practice methods are followed.
· Design and deliver solutions around the concept of “Planning for Failure”; ensure all solutions are deployed to withstand system / AZ failure.
· Work closely with Cloud Architects / the Infrastructure Services Manager to identify and eliminate “waste” across cloud platforms.
· Assist several internal DevOps teams with the day-to-day running of pipeline management and drive standardisation where possible.
· Ensure all Client data, in all forms, is backed up in a cost-efficient way.
· Use the appropriate monitoring tools to ensure all cloud / on-premise services are continuously monitored.
· Drive utilisation of the most efficient methods of resource deployment (Terraform, CloudFormation, Bootstrap).
· Drive the adoption, across the business, of serverless / open-source / cloud-native technologies where applicable.
· Ensure system documentation remains up to date and is designed according to AWS/Azure best-practice templates.
· Participate in detailed architectural discussions, calling on internal/external subject matter experts as needed, to ensure solutions are designed for successful deployment.
· Take part in regular discussions with business executives to translate their needs into technical and operational plans.
· Engage with vendors regularly to verify solutions and troubleshoot issues.
· Design and deliver technology workshops to other departments in the business.
· Take the initiative to improve service delivery.
· Ensure that Client delivers a service that resonates with customers’ expectations and sets Client apart from its competitors.
· Help design the infrastructure and processes necessary to support the recovery of critical technology and systems in line with contingency plans for the business.
· Continually assess working practices and review these with a view to improving quality and reducing costs.
· Champion the case for new technology and ensure new technologies are investigated, with proposals put forward regarding suitability and benefit.
· Motivate and inspire the rest of the infrastructure team and undertake the necessary steps to raise competence and capability as required.
· Help develop a culture of ownership and quality throughout the Infrastructure Services team.
Skills & Experience:
· AWS Certified Solutions Architect – Professional – REQUIRED
· Microsoft Azure Fundamentals AZ-900 – REQUIRED AS MINIMUM AZURE CERT
· Red Hat Certified Engineer (RHCE) – REQUIRED
· Must be able to demonstrate working knowledge of designing, implementing and maintaining best-practice AWS solutions (to a lesser extent, Azure).
· Proven examples of ownership of large AWS project implementations in enterprise settings.
· Experience managing the monitoring of infrastructure / applications using tools including CloudWatch, SolarWinds, New Relic, etc.
· Must have practical working knowledge of driving cost optimisation, security enhancement and performance optimisation.
· Solid understanding and experience of transitioning IaaS solutions to serverless technology.
· Must have working production knowledge of deploying infrastructure as code using Terraform.
· Need to be able to demonstrate security best practice when designing solutions in AWS.
· Working knowledge of optimising network traffic performance and delivering high availability while keeping a check on costs.
· Working experience of ‘On Premise to Cloud’ migrations.
· Experience of data centre technology infrastructure development and management.
· Must have experience working in a DevOps environment.
· Good working knowledge of WAN connectivity and how this interacts with the various entry point options into AWS, Azure, etc.
· Working knowledge of server and storage devices.
· Working knowledge of MySQL and SQL Server / cloud-native databases (RDS / Aurora).
· Experience of carrier-grade networking – on-prem and cloud.
· Experience in virtualisation technologies.
· Experience in ITIL and project management.
· Providing senior support to the Service Delivery team.
· Good understanding of new and emerging technologies.
· Excellent presentation skills to both an internal and external audience.
· The ability to share your specific expertise with the rest of the Technology group.
· Experience with MVNO or a Network Operations background from within the Telecoms industry (optional).
· Working knowledge of one or more European languages (optional).
Behavioural Fit:
· Professional appearance and manner
· High personal drive; results oriented; makes things happen; “can do” attitude
· Can work and adapt within a highly dynamic and growing environment
· Team player; effective at building close working relationships with others
· Effectively manages diversity within the workplace
· Strong focus on service delivery and the needs and satisfaction of internal clients
· Able to see issues from a global, regional and corporate perspective
· Able to effectively plan and manage large projects
· Excellent communication and interpersonal skills at all levels
· Strong analytical, presentation and training skills
· Innovative and creative
· Demonstrates technical leadership
· Visionary and strategic view of technology enablers (creative and innovative)
· High verbal and written communication ability; able to influence effectively at all levels
· Possesses technical expertise and knowledge to lead by example and input into technical debates
· Depth and breadth of experience in infrastructure technologies
· Enterprise mentality and global mindset
· Sense of humour
Role Key Performance Indicators:
· Design and deliver repeatable, best-in-class cloud solutions.
· Pro-actively monitor service quality and take action to scale operational services in line with business growth.
· Generate operating efficiencies, to be agreed with the Infrastructure Services Manager.
· Establish a “best in sector” level of operational service delivery and insight.
· Help create an effective team.
About Peppo
Peppo (https://www.peppo.co.in) is a fair food ordering utility. It helps restaurants manage both the demand and the delivery sides of their business, on the cloud.
The simplest way to think of Peppo is that it is a backend for restaurants that takes them online, not on just one channel but on all of them. App publishers that integrate with Peppo, will see every restaurant that uses Peppo and can enable food ordering through their own front-end.
On the fulfilment side, Peppo plugs restaurants into an aggregated delivery fleet and helps them choose between providers to optimise delivery cost and performance.
About the Role
We are looking for an experienced Business Development Manager with first-rate system administration and cloud systems management skills to join our retail platform team. In this role, you’ll be making some of the most significant decisions for the company. You need to have strong problem solving capabilities, be a team player and have great communication skills. You also need to be goal-oriented, have the ability to understand the core architecture and take up responsibility of product deployment and scaling. You must be a highly technical, hands-on system admin with passion for devops and scaling.
We value those with an entrepreneurial spirit and those who bring experience from established organizations. You ought to be comfortable in dealing with lots of moving pieces. You must have excellent attention to detail; and you should be flexible and comfortable to learn new technologies and systems.
The opportunity is ideal for someone early in their career who learns by doing. This is a high-exposure role and you will learn what it really looks like in the early days of a startup.
We offer a friendly, casual, collaborative working environment that is mission-driven and results-oriented. Our small office is in a great space in Bangalore. Due to the COVID situation, we are committed to having a fully remote engagement.
Responsibilities
- Collaborate with the Tech Lead on product deployment and scaling
- Responsible for communicating and reporting to the tech lead
- Setup new environments as well as maintain existing ones
- Participate in technical design and architecture reviews from the deployment and scaling point of view
- Write technical documentation
- Automate stuff
- Infrastructure as code
- Be aware of costs and skilled enough to squeeze them down
Skills and Requirements
We are looking at your experience not just in terms of the years you’ve clocked but the aptitude to get things done. Here are the skills the job entails, so please make sure you highlight them in whatever capacity you have demonstrated them: professional, freelance or hobbyist.
- 2+ years of software development experience (professional, freelance or hobbyist). Github profiles matter.
- Strong proficiency in AWS products with certifications
- Deep understanding of Infrastructure as Code tools like Ansible
- Understanding of tools like GitLab, Jenkins, etc., required for code management and continuous delivery
- Python or Bash scripting skills
- Experience with deployment frameworks, practices and processes
- Demonstrated skills in deploying, releasing, and maintaining highly scalable web applications
- Bonus if you know AWS CloudFormation or Terraform
- Good communication skills and ability to work independently or in a team.
Employment Type
- Full-time
- Remote/On-Site
Our Preferred Resume Format
- Polished resume with list of projects clearly listed (with responsibilities you held)
- Links to interesting projects that you worked on (professional, freelance or hobbyist)
- Blogs written and any other public contributions you made
Our Interview Process
- Resume evaluation: We will screen all incoming CVs and invite candidates fitting the job profile description for the next round.
- Case study: If you are one of the shortlisted candidates, you will be sent a case study to solve.
- Phone screening: we will follow up with a phone screening round for better understanding of the CV, technical proficiency, experience etc.
- F2F Round(s): The next round is a direct face-to-face discussion over a video call.
Job Perks
- Small team with an opportunity to have a steep learning curve.
- Have a meaningful impact on end customers and sellers on their experience with digital tools.
- Work on redefining the e-commerce experience by building cooperatives for the micro-services era. This makes Peppo a perfect home for those looking to pursue compassionate capitalism.
- Train under a diverse and accomplished set of team leads, mentors and investors who have worked in government and big tech.
- Holistic development guaranteed as you will grow in an environment that prizes lateral thinking allowing you to supplement your core responsibilities with additional functions.
- Competitive compensation and flexibility to work from anywhere since Peppo uses the best of productivity tools.
Peppo is an equal opportunity employer. We're excited to work with talented and empathetic people no matter their race, caste, color, gender, sexual orientation, religion, national origin, physical disability, mental well-being, or age. Our code of conduct reflects the kind of company we strive to be, and we celebrate our diversity for that truly makes us create products that cater to the world.
• Bachelor's or Master’s degree in Computer Science or a related field.
• 5+ years of professional development experience.
• Deep experience in web applications, object-oriented programming, web services, REST,
Cloud computing, AWS/Azure, node.js, full-stack development
• Experience with multiple programming languages and frameworks including at least one
of JavaScript/HTML/CSS, Java, ReactJS, Python
• Experience in designing, developing and managing large scale web services
• Advanced JavaScript knowledge is a must.
• Experience designing APIs and frameworks that are used by others
• Familiar with Git, Confluence, and Jira
• Exceptional problem-solving skills, with experience in defining and understanding complex
system architectures and design patterns
• Excellent communication skills. Be able to articulate technical decisions and produce
excellent technical documents
• Experience creating and maintaining unit tests and continuous integration
• Contribution to open source is a plus
• Experience developing cross-platform applications is a plus
Skills: Python, Docker or Ansible, AWS
➢ Experience building a multi-region, highly available, auto-scaling infrastructure that optimizes performance and cost; plan for future infrastructure as well as maintain & optimize existing infrastructure.
➢ Conceptualize, architect and build automated deployment pipelines in a CI/CD environment like
Jenkins.
➢ Conceptualize, architect and build a containerized infrastructure using Docker, Mesosphere or similar SaaS platforms.
➢ Work with developers to institute systems, policies and workflows which allow for rollback of deployments.
➢ Triage releases of applications to the production environment on a daily basis.
➢ Interface with developers and triage SQL queries that need to be executed in production environments.
➢ Maintain 24/7 on-call rotation to respond and support troubleshooting of issues in production.
➢ Assist the developers and on-calls for other teams with post-mortems, follow-up and review of issues affecting production availability.
➢ Establishing and enforcing systems monitoring tools and standards
➢ Establishing and enforcing Risk Assessment policies and standards
➢ Establishing and enforcing Escalation policies and standards
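The monitoring standards item above can start as simply as a scripted availability check feeding an alerting channel. The sketch below is a hedged illustration only; the endpoints are invented.

```python
# Sketch: basic HTTP availability check of the kind an on-call rotation might automate.
# The endpoints are illustrative placeholders.
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://api.example.com/healthz",
    "https://app.example.com/healthz",
]

def is_healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, TimeoutError):
        return False

for url in ENDPOINTS:
    print(("OK      " if is_healthy(url) else "FAILING "), url)
```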
This role is for 1 month, during which the person will work from the client site in Paris to understand the system architecture and document it. Contract extension for this role will depend purely on the individual's performance.
Since the requirement is immediate and critical, we need someone who can join us soon and travel to Paris in December.
- Hands on experience handling multiple data sources/datasets
- Experience in a data/BI architect role
- Expert on SSIS, SSRS, SSAS
- Should have knowledge writing MDX queries
- Technical document preparation
- Should have excellent communication
- Process oriented
- Strong project management
- Should be able to think Out of the Box and provide ideas to have better solutions
- Outstanding team player with positive attitude
● Responsible for design, development, and implementation of Cloud solutions.
● Responsible for achieving automation & orchestration of tools (Puppet/Chef)
● Monitoring the product's security & health (Datadog/New Relic)
● Managing and maintaining databases (MongoDB & Postgres)
● Automating infrastructure using AWS services like CloudFormation
● Participating in Infrastructure Security Audits
● Migrating to Container technologies (Docker/Kubernetes)
● Should be able to work on serverless concepts (AWS Lambda)
● Should be able to work with AWS services like EC2, S3, CloudFormation, EKS, IAM, RDS, etc.
What you bring:
● Problem-solving skills that enable you to identify the best solutions.
● Team collaboration and flexibility at work.
● Strong verbal and written communication skills that will help in presenting complex ideas in an accessible and engaging way.
● Ability to choose the tools and technologies that best fit the business needs.
Aviso offers:
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval and a 3 month paid sabbatical after 3 years of service
● CEO moonshots projects with cash awards every quarter
● Upskilling and learning support including via paid conferences, online courses, and certifications
● Every month, Rupees 2,500 will be credited to your Sodexo meal card