
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental datasets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, and containers. As part of our core development crew, you'll figure out how to deploy applications with high availability and fault tolerance, along with a monitoring solution that provides alerts across multiple microservices and pipelines. Come save the planet with us!
Your Role
- Scale applications up and down on command.
- Manage a cluster of microservices talking to each other.
- Build pipelines for large-scale data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available, scalable PostgreSQL database cluster.
- Maintain alerting and monitoring systems using Prometheus, Grafana, and Elasticsearch.
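For a flavor of what the alerting work above involves, here is a minimal Prometheus alerting rule; the metric name and thresholds are illustrative placeholders, not taken from the posting:

```yaml
groups:
  - name: service-availability
    rules:
      - alert: HighErrorRate
        # Fires when more than 5% of HTTP requests return 5xx over 5 minutes.
        # http_requests_total is a hypothetical metric name for illustration.
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "Error rate above 5% for 10 minutes"
```

Rules like this are loaded by Prometheus and routed to on-call channels via Alertmanager, with Grafana providing the dashboards on top of the same metrics.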
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code: CloudFormation, Terraform, Ansible.
- CI/CD concepts and implementation using CodePipeline and GitHub Actions.
- Advanced command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced containerization: Docker, Kubernetes, ECS.
- Experience with managed services such as database clusters and distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
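As a rough sketch of the CI/CD requirement above, a minimal GitHub Actions workflow might look like this; the repository layout, Python version, and test command are assumptions, not details from the posting:

```yaml
# .github/workflows/ci.yml -- runs tests on every push and pull request
name: ci
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt   # hypothetical dependency file
      - run: pytest                            # hypothetical test command
```

A real pipeline would typically add build, image-push, and deploy stages gated on the test job.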
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community you can use, contribute to, and collaborate with.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's all the non-work fun, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by using your paid leave.

Similar jobs
We are looking for a DevOps Architect/Engineer with strong technical expertise and several years of experience to join our IT firm. You will be responsible for facilitating the development process and operations across our organization. Besides, you should have excellent leadership skills to mentor and guide the team members.
To be successful in this job role, you should be able to streamline all DevOps practices. You should be able to review technical operations and automate the process using the right tools and techniques.
Your knowledge of the latest industry trends will prove beneficial in designing efficient practices. Moreover, you should be able to demonstrate excellent research skills and outstanding problem-solving abilities. If you possess the required skills, knowledge, and experience, then do write to us. We would be happy to meet you.
Responsibilities
- Simplifying the development process and operations
- Analyzing any setbacks or shortcomings in the operations
- Developing appropriate DevOps channels
- Setting up a continuous build environment to boost software development
- Designing and delivering efficient and comprehensive best practices
- Reviewing and managing technical operations
- Analyzing and streamlining the existing DevOps practices
- Monitoring and guiding the development team
- Using the right tools and techniques to automate technical processes
- Handling all deployment processes
- Minimizing failures and troubleshooting any technical issues
- Implementing effective DevOps solutions
- Ensuring all systems are secure and scalable
Requirements
- Bachelor’s degree in Information Technology, Computer Science or a related discipline
- Previous work experience in the IT industry
- Thorough knowledge of DevOps tools like Docker, Git, Puppet, and Jenkins
- Familiarity with the latest industry trends and best practices
- Basic understanding of cloud-based environments
- Know-how of software development, testing, and configuration methodologies
- Knowledge of scripting languages like Python and JavaScript, plus markup such as HTML
- Excellent analytical and research skills
- Strong leadership skills
- Ability to motivate team members
- Highly innovative individual
- Excellent problem-solving ability
- Ability to organize and manage time effectively
- Ability to work collaboratively and independently
- Good communication skills
- Ability to drive the automation process
- Demonstrated multitasking skills
- Familiarity with deployment process and tools
- Ability to maintain the confidentiality of any sensitive information
- Willingness to work in a competitive environment
We are looking for a DevOps Lead to join our team.
Responsibilities
• A technology professional who understands software development and can solve IT operational and deployment challenges using software engineering tools and processes. This position requires an understanding of both software development (Dev) and deployment operations (Ops)
• Identify manual processes and automate them using various DevOps automation tools
• Maintain the organization’s growing cloud infrastructure
• Monitor and maintain DevOps environment stability
• Collaborate with distributed Agile teams to define technical requirements and resolve technical design issues
• Orchestrating builds and test setups using Docker and Kubernetes.
• Participate in designing and building Kubernetes, Cloud, and on-prem environments for maximum performance, reliability and scalability
• Share business and technical learnings with the broader engineering and product organization, while adapting approaches for different audiences
Requirements
• Candidates working for this position should possess at least 5 years of work experience as a DevOps Engineer.
• Candidate should have experience in ELK stack, Kubernetes, and Docker.
• Solid experience in the AWS environment.
• Should have experience with monitoring tools like Datadog or New Relic.
• Minimum of 5 years experience with code repository management, code merge and quality checks, continuous integration, and automated deployment & management using tools like Jenkins, SVN, Git, Sonar, and Selenium.
• Candidates must possess ample knowledge and experience in system automation, deployment, and implementation.
• Candidates must possess experience in using Linux, Jenkins, and ample experience in configuring and automating the monitoring tools.
• Candidates should also have experience with the software development process and with tools and languages like Python, Java, shell scripting, MongoDB, PostgreSQL, and Git.
• Candidates should demonstrate knowledge of handling distributed data systems, e.g. Elasticsearch, Cassandra, and Hadoop.
• Should have experience with GitLab CI.
Roles and Responsibilities
- 7-10 years' total experience, including 6+ years in a production 24/7 high-availability, multi-site cloud environment, including application hosting, CDN networks, security, and information protection.
- Experience of leading overall infrastructure for a complex organization and network, including 24x7 monitoring of a media website & digital properties.
- Experience in hosting and managing video streaming applications, and React and Node.js based applications.
- Experience with regulatory compliance issues, as well as best practices in application and network security.
- Experience in hosting services on Amazon Cloud & Google Cloud.
- Experience in performing Vulnerability Assessment at server & application level
- Experience in managing Live Streaming on digital platforms.
- Experience in managing SVN, Git code repository & Code release management.
- Partners with the Technology Head to lead the technology infrastructure strategy and execution for the enterprise.
- Planning, project management and implementation leadership, identifying opportunities for automation, cost savings, and service quality improvement.
- Provides infrastructure services vision, enables innovation and seeks to leverage market trends that can create business value consistent with the company’s requirements and expectations.
- Participate in the formulation of the company's enterprise architecture and business system plans; assessing cost and feasibility, and ensuring the plan is aligned with and supports the strategic goals of the business
- Hands-on technical depth enables direct oversight, problem-solving leadership and participation for complex infrastructure implementation, system upgrades and operational troubleshooting.
- Experience with comprehensive disaster recovery architecture and operations, including storage area network and redundant, highly-available server and network architectures.
- Leadership for delivery of 24/7 service operations and KPI compliance.
- Ensure best practices are followed for code release management & monitoring of traffic on websites & other applications.
Electrum is looking for an experienced and proficient DevOps Engineer. This role will give you an opportunity to explore what's possible in a collaborative and innovative work environment. If your goal is to work with a team of talented professionals keenly focused on solving complex business problems and supporting product innovation with technology, you might be our new DevOps Engineer. In this position, you will help build out systems for our rapidly expanding team, enabling the whole engineering group to operate more effectively and iterate at top speed in an open, collaborative environment. The ideal candidate has a solid background in software engineering and substantial experience deploying product updates, identifying production issues, and implementing integrations, along with proven risk-taking ability, a willingness to take on challenges, a strong belief in efficiency and innovation, and exceptional communication and documentation skills.
YOU WILL:
- Plan for future infrastructure as well as maintain & optimize the existing infrastructure.
- Conceptualize, architect, and build:
- 1. Automated deployment pipelines in a CI/CD environment like Jenkins;
- 2. Infrastructure using Docker, Kubernetes, and other serverless platforms;
- 3. Secured network utilizing VPCs with inputs from the security team.
- Work with developers & the QA team to institute a policy of continuous integration with automated testing.
- Architect, build, and manage dashboards to provide visibility into delivery and into production application functional and performance status.
- Work with developers to institute systems, policies, and workflows which allow for a rollback of deployments.
- Triage releases of applications/hotfixes to the production environment on a daily basis.
- Interface with developers and triage SQL queries that need to be executed in production environments.
- Maintain 24/7 on-call rotation to respond and support troubleshooting of issues in production.
- Assist developers and on-call engineers from other teams with postmortems, follow-up, and review of issues affecting production availability.
- Scale the Electrum platform to handle millions of concurrent requests.
- Reduce Mean Time To Recovery (MTTR), and enable high availability and disaster recovery.
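To illustrate the Docker/Kubernetes infrastructure work listed above, here is a minimal Kubernetes Deployment manifest; the service name, image, and resource figures are hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                 # hypothetical service name
spec:
  replicas: 3               # multiple replicas for high availability
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:             # guaranteed baseline per pod
              cpu: 100m
              memory: 128Mi
            limits:               # hard ceiling per pod
              cpu: 500m
              memory: 256Mi
```

Paired with a HorizontalPodAutoscaler and a Service, a manifest like this is the usual building block for the auto-scaling, multi-replica setups the posting describes.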
PREREQUISITES:
- Bachelor’s degree in engineering, computer science, or related field, or equivalent work experience.
- Minimum of six years of hands-on experience in software development and DevOps, specifically managing AWS infrastructure such as EC2, RDS, ElastiCache, S3, IAM, CloudTrail, and other AWS services.
- At least 2 years of experience in building and owning serverless infrastructure.
- At least 2 years of scripting experience in Python (preferred) and shell, plus experience with web application deployment systems and continuous integration tools (e.g. Ansible).
- Experience building a multi-region highly available auto-scaling infrastructure that optimizes performance and cost.
- Experience in automating the provisioning of AWS infrastructure as well as automation of routine maintenance tasks.
- Must have prior experience automating deployments to production and lower environments.
- Worked on providing solutions for major automation with scripts or infrastructure.
- Experience with APM tools such as DataDog and log management tools.
- Experience in designing and implementing Essential Functions System Architecture Process; establishing and enforcing Network Security Policy (AWS VPC, Security Group) & ACLs.
- Experience establishing and enforcing:
- 1. System monitoring tools and standards
- 2. Risk Assessment policies and standards
- 3. Escalation policies and standards
- Excellent DevOps engineering, team management, and collaboration skills.
- Advanced knowledge of programming languages such as Python and writing code and scripts.
- Experience or knowledge of Application Performance Monitoring (APM); prior experience as an open-source contributor is preferred.
Azure DevOps
On-premises to Azure migration
Docker, Kubernetes
Terraform, CI/CD pipelines
Experience: 9+ years
Location: BG, Hyderabad, Remote, Hybrid
Budget: up to 30 LPA
Excellent understanding of the SDLC: patching, releases, and software development at scale.
Excellent knowledge of Git.
Excellent knowledge of Docker.
Good understanding of enterprise standards and enterprise building principles.
In-depth knowledge of Windows OS.
Knowledge of Linux OS.
Theoretical and practical skills in web environments based on .NET technologies, e.g. IIS, Kestrel, .NET Core, C#.
Strong scripting skills in one or any combination of CMD shell, Bash, PowerShell, or Python.
Good understanding of web-environment architecture approaches.
Strong knowledge of cloud provider offerings, Azure or AWS.
Good knowledge of configuration management tools such as Ansible, Chef, SaltStack, or Puppet (good to have).
Good knowledge of cloud infrastructure orchestration tools like Kubernetes or cloud-based orchestration.
Good knowledge of one or any combination of cloud infrastructure provisioning tools like ARM Templates, Terraform, or Pulumi.
In-depth knowledge of one or any combination of software delivery orchestration tools like Azure Pipelines, Jenkins Pipelines, Octopus Deploy, etc.
Strong practical knowledge of CI tools, i.e. Azure DevOps or Jenkins.
Excellent knowledge of continuous integration and delivery approaches.
Good knowledge of integrating code quality tools like SonarQube and application or container security tools like Veracode, Checkov, or Trivy.
In-depth knowledge of Azure DevOps build infrastructure setup, and Azure DevOps administration and access management.
Hammoq is a fast-growing startup in the US and UK.
- Design and implement secure automation solutions for development, testing, and production environments
- Build and deploy automation, monitoring, and analysis solutions
- Manage our continuous integration and delivery pipeline to maximize efficiency
- Implement industry best practices for system hardening and configuration management
- Secure, scale, and manage Linux virtual environments
- Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring
- Continuously evaluate existing systems against industry standards, and make recommendations for improvement
Desired Skills & Experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Understanding of system administration in Linux environments
- Strong knowledge of configuration management tools
- Familiarity with continuous integration tools such as Jenkins, Travis CI, Circle CI
- Proficiency in scripting languages including Bash, Python, and JavaScript
- Strong communication and documentation skills
- An ability to drive to goals and milestones while valuing and maintaining strong attention to detail
- Excellent judgment, analytical thinking, and problem-solving skills
- Full understanding of software development lifecycle best practices
- Self-motivated individual who possesses excellent time management and organizational skills
In PM's Words
Bash scripting, containerd (or Docker), Linux operating system basics, Kubernetes, Git, Jenkins (or any pipeline management), GCP (or familiarity with any cloud technology).
Linux is the priority; most people come from Windows backgrounds, but we need Linux. Windows knowledge on top of that is an added advantage.
You will most certainly be working with an amazing team.

What you do :
- Developing automation for the various deployments core to our business
- Documenting run books for various processes / improving knowledge bases
- Identifying technical issues, communicating and recommending solutions
- Miscellaneous support (user account, VPN, network, etc)
- Develop continuous integration / deployment strategies
- Production systems deployment/monitoring/optimization
- Management of staging/development environments
What you know :
- Ability to work with a wide variety of open source technologies and tools
- Ability to code/script (Python, Ruby, Bash)
- Experience with systems and IT operations
- Comfortable with frequent incremental code testing and deployment
- Strong grasp of automation tools (Chef, Packer, Ansible, or others)
- Experience with cloud infrastructure and bare-metal systems
- Experience optimizing infrastructure for high availability and low latencies
- Experience with instrumenting systems for monitoring and reporting purposes
- Well versed in software configuration management systems (git, others)
- Experience with cloud providers (AWS or other) and tailoring apps for cloud deployment
- Data management skills
Education :
- Degree in Computer Engineering or Computer Science
- 1-3 years of equivalent experience in DevOps roles.
- Work conducted is focused on business outcomes
- Can work in an environment with a high level of autonomy (at the individual and team level)
- Comfortable working in an open, collaborative environment, reaching across functional boundaries.
Our Offering :
- True start-up experience - no bureaucracy and a ton of tough decisions that have a real impact on the business from day one.
- The camaraderie of an amazingly talented team that is working tirelessly to build a great OS for India and surrounding markets.
Perks :
- Awesome benefits, social gatherings, etc.
- Work with intelligent, fun and interesting people in a dynamic start-up environment.


You will be responsible for
1. Setting up and maintaining cloud (AWS/GCP/Azure) and Kubernetes clusters and automating their operation
2. All operational aspects of the Devtron platform, including maintenance, upgrades, and automation
3. Providing Kubernetes expertise to facilitate smooth and fast customer onboarding on the Devtron platform
Responsibilities:
1. Manage the Devtron platform on multiple Kubernetes clusters
2. Design and embed industry best practices for online services, including disaster recovery, business continuity, monitoring/alerting, and service health measurement
3. Provide operational support for day-to-day activities involving the deployment of services
4. Identify opportunities for improving the security, reliability, and scalability of the platform
5. Facilitate smooth and fast customer onboarding on the Devtron platform
6. Drive customer engagement
Requirements:
● Bachelor's Degree in Computer Science or a related field.
● 2+ years working as a DevOps engineer
● Proficient in 1 or more programming languages (e.g. Python, Go, Ruby).
● Familiar with shell scripts, Linux commands, and network fundamentals
● Understanding of large scale distributed systems
● Basic understanding of cloud computing (AWS/GCP/Azure)
Preferred Qualifications:
● Great analytical and interpersonal skills
● Passion for creating efficient, reliable, reusable programs/scripts.
● Excited about technology, with a strong interest in learning about and playing with the latest technologies and doing POCs.
● Strong customer focus, ownership, urgency and drive.
● Knowledge and experience with cloud-native tools like Prometheus, Kubernetes, Docker, and Grafana.


