About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI & satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental datasets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, containers, etc. As part of our core development crew, you'll be figuring out how to deploy applications with high availability and fault tolerance, along with a monitoring solution that provides alerts across multiple microservices and pipelines. Come save the planet with us!
Your Role
- Build applications at scale that can go up and down on command (see the sketch after this list).
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available and scalable PostgreSQL database cluster.
- Maintain alerting and monitoring systems using Prometheus, Grafana, and Elasticsearch.
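A quick illustration of the first bullet (scaling a service up and down on command): the sketch below uses boto3 to change and verify the desired task count of an ECS service. It is only a minimal sketch under assumed names; the cluster and service here are hypothetical placeholders, and in production this would typically sit behind an autoscaling policy rather than a manual script.

```python
# Minimal sketch: scaling an ECS service "up and down on command" with boto3.
# Cluster and service names are hypothetical placeholders, not real infrastructure.
import boto3

ecs = boto3.client("ecs", region_name="ap-south-1")

def scale_service(cluster: str, service: str, desired_count: int) -> None:
    """Set the desired task count for an ECS service."""
    ecs.update_service(cluster=cluster, service=service, desiredCount=desired_count)

def current_count(cluster: str, service: str) -> int:
    """Read back the running task count so the change can be verified."""
    resp = ecs.describe_services(cluster=cluster, services=[service])
    return resp["services"][0]["runningCount"]

if __name__ == "__main__":
    scale_service("data-pipeline-cluster", "ingestion-api", desired_count=6)
    print("running tasks:", current_count("data-pipeline-cluster", "ingestion-api"))
```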
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code - CloudFormation, Terraform, Ansible (see the sketch after this list).
- CI/CD concepts and implementation using CodePipeline, GitHub Actions.
- Strong command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced Containerization - Docker, Kubernetes, ECS.
- Experience with managed services such as database clusters and distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
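As a small, hedged illustration of the infrastructure-as-code requirement above, the sketch below deploys a CloudFormation stack with boto3. The stack name and template path are hypothetical; a real setup would more likely run this step from CodePipeline or GitHub Actions.

```python
# Minimal infrastructure-as-code sketch: deploying a CloudFormation stack with boto3.
# Stack name and template path are hypothetical placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="ap-south-1")

def deploy_stack(stack_name: str, template_path: str) -> None:
    with open(template_path) as f:
        template_body = f.read()
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed when the template creates IAM resources
    )
    # Block until the stack finishes creating (raises if creation fails).
    waiter = cfn.get_waiter("stack_create_complete")
    waiter.wait(StackName=stack_name)

if __name__ == "__main__":
    deploy_stack("monitoring-stack", "templates/monitoring.yaml")
```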
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community where you can use, contribute to, and collaborate on open-source projects.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's all the non-work and fun aspect, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by using your paid leave.
Similar jobs
Job Title: Data Architect - Azure DevOps
Job Location: Mumbai (Andheri East)
About the company:
MIRACLE HUB CLIENT is a predictive analytics and artificial intelligence company headquartered in Boston, US, with offices across the globe. We build prediction models and algorithms to solve high-priority business problems. Working across multiple industries, we have designed and developed breakthrough analytic products and decision-making tools by leveraging predictive analytics, AI, machine learning, and deep domain expertise.
Skill-sets Required:
- Design Enterprise Data Models
- Azure Data Specialist
- Security and Risk
- GDPR and other compliance knowledge
- Scrum/Agile
Job Role:
- Design and implement effective database solutions and models to store and retrieve company data
- Examine and identify database structural necessities by evaluating client operations, applications, and programming.
- Assess database implementation procedures to ensure they comply with internal and external regulations
- Install and organize information systems to guarantee company functionality.
- Prepare accurate database design and architecture reports for management and executive teams.
Desired Candidate Profile:
- Bachelor’s degree in computer science, computer engineering, or relevant field.
- A minimum of 3 years’ experience in a similar role.
- Strong knowledge of database structure systems and data mining.
- Excellent organizational and analytical abilities.
- Outstanding problem solver.
- Immediate joining preferred (a notice period of up to 1 month is also acceptable)
- Excellent English communication and presentation skills, both verbal and written
- Charismatic, competitive and enthusiastic personality with negotiation skills
Compensation: No bar.
What you'll be doing
Building engineering operations for the smooth functioning of the platform and developing processes to oversee continuous deployment and integration.
Writing scripts, infrastructure-as-code components, deployment and CI/CD integrations, and automating the visualization of runtime metrics.
Working with monitoring and log-aggregation frameworks such as Logstash, Splunk, Elasticsearch, and Kibana (see the sketch after this list).
Designing and implementing cloud-native security concepts, DevSecOps, and MLOps practices.
Deploying and monitoring infrastructure, including logging, monitoring, alerting, team chat, and other tooling, as well as websites, access keys, third-party services, and security support.
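To make the monitoring and log-aggregation point concrete, here is a minimal sketch of shipping a structured runtime event to Elasticsearch so it can be visualized in Kibana. The endpoint, credentials, and index name are hypothetical, and it assumes the Elasticsearch 8.x Python client.

```python
# Minimal sketch: index a structured log/metric event into Elasticsearch for Kibana.
# Endpoint, credentials, and index name are hypothetical placeholders.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("https://elastic.example.internal:9200",
                   basic_auth=("elastic", "changeme"))

def ship_event(service: str, level: str, message: str, latency_ms: float) -> None:
    """Index one structured event into a daily log index."""
    index = f"app-logs-{datetime.now(timezone.utc):%Y.%m.%d}"
    es.index(index=index, document={
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "level": level,
        "message": message,
        "latency_ms": latency_ms,
    })

ship_event("payments-api", "INFO", "settlement batch completed", latency_ms=182.4)
```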
We will need your…
Expertise in Docker and docker-compose
3+ years of hands-on experience programming in languages such as Python, Ruby, Go, Swift, Java, or a similar object-oriented language.
Hands-on experience with microservices and distributed application architecture, such as containers, Kubernetes, and/or serverless technology.
Expertise in GitLab for SCM and CI/CD.
Why Us?
Opportunity to create and work on problems that would democratise financial services through data: the next big thing in fintech.
Remuneration: When we say best in the industry, we mean it! You can focus on what you do best; we take care of the compensation.
Flexible working hours with Work From Home options.
Quarterly free online courses and relevant certifications, plus sponsorship for industry-related tech conferences.
Kick-ass benefits include comprehensive health insurance for you and your family, including parents
Options to choose your own device (from the available options).
And you will be able to work autonomously and have the freedom to innovate.
DevOps Engineer
at Sela Technology Solutions Pvt Ltd
DevOps Engineer position - 3+ years
Kubernetes, Helm - 3+ years (dev & administration; see the sketch below)
Monitoring platform setup experience - Prometheus, Grafana
Azure/AWS/GCP cloud experience - 1+ years.
Ansible/Terraform/Puppet - 1+ years
CI/CD - 3+ years
● Managing deployments seamlessly across multi-cloud architectures, including AWS, Azure, and GCP
● Enabling growth and scalability of the company’s infrastructure based on industry best practices and cutting edge technologies
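As an illustrative sketch of the Kubernetes administration work described above (not this company's actual tooling), the snippet below uses the official kubernetes Python client to scale a deployment and run a quick pod health check; the deployment and namespace names are hypothetical.

```python
# Illustrative Kubernetes administration sketch using the official Python client.
# Deployment and namespace names are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

apps = client.AppsV1Api()
core = client.CoreV1Api()

# Scale a deployment to a new replica count.
apps.patch_namespaced_deployment_scale(
    name="web-frontend",
    namespace="production",
    body={"spec": {"replicas": 5}},
)

# List pods that are not in the Running phase, as a quick health check.
for pod in core.list_namespaced_pod(namespace="production").items:
    if pod.status.phase != "Running":
        print(pod.metadata.name, pod.status.phase)
```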
Job Description: DevOps
Roles & Responsibilities:
● Manage systems on AWS infrastructure including application servers, database servers
● Proficiency with EC2, Redshift, RDS, Elasticsearch, MongoDB, and other services.
● Proficiency with managing a distributed service architecture with multiple microservices, including maintaining dev, QA, staging, and production environments, managing zero-downtime releases, ensuring zero-downtime rollbacks on failure, and scaling on demand (see the sketch after this list)
● Containerization of workloads and rapid deployment
● Driving cost optimization while balancing performance
● Manage high availability of existing systems and proactively address system maintenance issues
● Manage AWS infrastructure configurations along with Application Load Balancers, HTTPS configurations, and network configurations (VPCs)
● Work with the software engineering team to automate code deployment
● Build and maintain tools for deployment, monitoring, and operations, and troubleshoot and resolve issues in our dev, test, and production environments.
● Familiarity with managing Spark-based ETL pipelines is a plus
● Experience in managing a team of DevOps Engineers
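One way the zero-downtime release requirement above is often handled is by gating traffic shifts on load-balancer health checks. The sketch below is a minimal, assumed example using boto3 against an Application Load Balancer target group; the target group ARN is a placeholder.

```python
# Minimal release gate sketch: confirm all ALB targets are healthy before
# shifting traffic or declaring a deployment successful. ARN is a placeholder.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

def all_targets_healthy(target_group_arn: str) -> bool:
    resp = elbv2.describe_target_health(TargetGroupArn=target_group_arn)
    states = [d["TargetHealth"]["State"] for d in resp["TargetHealthDescriptions"]]
    return bool(states) and all(state == "healthy" for state in states)

if __name__ == "__main__":
    arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
    if not all_targets_healthy(arn):
        raise SystemExit("unhealthy targets detected - hold the release or roll back")
```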
Required Qualifications:
● Bachelor’s or Master’s degree in a quantitative field
● Cloud computing experience, Amazon Web Services (AWS). Bonus if you've worked on Azure, GCP and on cost optimization.
● Prior experience in working on a distributed microservices architecture and containerization.
● Strong background in Linux/Windows administration and scripting
● Experience with CI/CD pipelines, Git, deployment configuration and monitoring tools
● Working understanding of various components for Web Architecture
● A working understanding of code and scripts (JavaScript, Angular, Python)
● Excellent communication skills, problem-solving and troubleshooting.
Job Type: Full-time
DevOps Engineer - Product
Be part of great companies to work for and boost your career as a Product DevOps Engineer.
Rishabh Software (CMMI Level 3), an India-based IT service provider, focuses on cost-effective, high-quality, and timely delivered offshore software development, Business Process Outsourcing (BPO), and engineering services.
Our core competency lies in developing customized software solutions using web-based and client/server technology. With over 20 years of software development experience working with various domestic and international companies, we at Rishabh Software provide solutions tailored to client requirements that help industries across domains turn business problems into strategic advantages.
The Product Development division is relatively new and comes with a start-up culture, where a long path is being built toward developing reliable and scalable products.
Through our offices in the US (Silicon Valley), UK (London), and India (Vadodara & Bangalore), we serve our global clients with well-executed, quality software development, BPO, and engineering services.
Key Responsibilities
- Ability to understand the product architecture and build a supporting platform on any of the cloud providers, such as AWS, Azure, and Google Cloud.
- Build different environments and automate the release process.
- Facilitating the development process and operations.
- Understand the non-functional requirements and propose the right infrastructure.
- Establishing continuous build environments to speed up software development
- Design and implement efficient practices.
- Able to do PoCs on niche cloud services and tools, and suggest best-in-class tools for the cloud infrastructure.
- Managing and reviewing technical operations.
- Guiding the development teams
Technical Skills
Mandatory
- 5-6 years of experience in building a DevOps practice for a product organisation
- Experience with cloud platforms AWS, Google Cloud, Azure, etc.
- Experience with infrastructure as code and task automation
- Proficiency in cloud administration and automating tasks using any scripting language.
- Working knowledge & hands-on experience of Docker, Kubernetes, Terraform, Ansible, YAML
- Package and deploy software to dev/test environments in both Linux and Windows
- Experience in setting up cloud environments comprising container orchestration, databases, Docker repositories, messaging applications, monitoring and audit, etc.
- Experience in building CI/CD pipelines (release pipelines) using AWS/Azure/Google Cloud and Jenkins (see the sketch after this list)
- Experience in Agile methodology
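For the CI/CD pipeline item above, a release pipeline is often driven programmatically. The sketch below is a hedged example using the python-jenkins library to trigger a parameterized build and read back its result; the Jenkins URL, credentials, and job name are hypothetical.

```python
# Illustrative sketch: trigger and inspect a Jenkins release pipeline job
# via the python-jenkins library. URL, credentials, and job name are hypothetical.
import jenkins

server = jenkins.Jenkins("https://jenkins.example.internal",
                         username="ci-bot", password="api-token")

# Kick off a parameterized build of the release pipeline.
server.build_job("product-release-pipeline", {"ENVIRONMENT": "staging"})

# Inspect the last completed build's result (e.g. SUCCESS / FAILURE).
info = server.get_job_info("product-release-pipeline")
last = info.get("lastCompletedBuild")
if last:
    build = server.get_build_info("product-release-pipeline", last["number"])
    print("last completed build:", build["result"])
```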
Good To Have
- Certifications in DevOps and Cloud will be added advantage
You would be part of
- Exciting journey in building next generation enterprise products
- Flat organisation structure
- Enriches both domain and technical skills
Soft Skills
- Good verbal and written communication skills
- Ability to collaborate and work effectively in a team
- Proven experience leading and mentoring a team
- Excellent analytical and logical skills
Education
- Preferred: Graduate or Post Graduate with specialization related to Computer Science or IT
Role:
- Developing a good understanding of the solutions which Company delivers, and how these link to Company’s overall strategy.
- Making suggestions towards shaping the strategy for a feature and engineering design.
- Managing own workload and usually delivering unsupervised. Accountable for their own workstream or the work of a small team.
- Understanding Engineering priorities, being able to focus on these, and helping others to remain focused too.
- Acting as the Lead Engineer on a project, helping ensure others follow Company processes such as release and version control.
- An active member of the team, through useful contributions to projects and in team meetings.
- Supervising others, deputising for a Lead and/or supporting them with tasks, mentoring new joiners, interns, and Masters students, and sharing knowledge and learnings with the team.
Requirements:
- Strong, proven professional programming experience.
- Strong command of Algorithms, Data structures, Design patterns, and Product Architectural Design.
- Good understanding of DevOps, cloud technologies, CI/CD, serverless, and Docker, preferably on AWS
- Proven track record and expertise in one of the fields: DevOps/Frontend/Backend
- Excellent coding and debugging skills in any language, with command of at least one programming paradigm; JavaScript/Python/Go preferred
- Experience with at least one type of database system: RDBMS or NoSQL
- Ability to document requirements and specifications.
- A naturally inquisitive and problem-solving mindset.
- Strong experience in using AGILE or SCRUM techniques to build quality software.
- Advantage: experience in React.js, AWS, Node.js, Golang, Apache Spark, ETL tools, or data integration systems; AWS certification; having worked at a product company and built a product from scratch; good communication skills; open-source contributions; proven competitive coding
Job Brief:
We are looking for candidates who have development experience and have delivered CI/CD-based projects. They should have good hands-on experience with the Jenkins master-slave architecture and with AWS-native services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline, as well as experience setting up cross-platform CI/CD pipelines that span different cloud platforms or a mix of on-premise and cloud.
Job Location:
Pune.
Job Description:
- Hands-on with AWS (Amazon Web Services) Cloud, its DevOps services, and CloudFormation.
- Experience interacting with customer.
- Excellent communication.
- Hands-on experience creating and managing Jenkins jobs, and Groovy scripting.
- Experience in setting up Cloud Agnostic and Cloud Native CI/CD Pipelines.
- Experience in Maven.
- Experience in scripting languages like Bash, Powershell, Python.
- Experience in automation tools like Terraform, Ansible, Chef, Puppet.
- Excellent troubleshooting skills.
- Experience in Docker and Kubernetes, including creating Dockerfiles.
- Hands on with version control systems like GitHub, Gitlab, TFS, BitBucket, etc.
- As a DevOps Engineer, you need to have strong experience in CI/CD pipelines.
- Setup development, testing, automation tools, and IT infrastructure
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Selecting and deploying appropriate CI/CD tools
- Deploy and maintain CI/CD pipelines across multiple environments (mobile, web APIs & AI/ML); see the sketch below
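As a minimal illustration of deploying and driving such pipelines with AWS-native services, the sketch below starts a CodePipeline execution from a script and polls its status. The pipeline name is a hypothetical placeholder, not a real project.

```python
# Minimal sketch: start an AWS CodePipeline execution and poll until it finishes.
# Pipeline name is a hypothetical placeholder.
import time
import boto3

cp = boto3.client("codepipeline", region_name="ap-south-1")

def run_pipeline(name: str) -> str:
    execution_id = cp.start_pipeline_execution(name=name)["pipelineExecutionId"]
    while True:
        status = cp.get_pipeline_execution(
            pipelineName=name, pipelineExecutionId=execution_id
        )["pipelineExecution"]["status"]
        if status != "InProgress":
            return status  # Succeeded, Failed, Stopped, ...
        time.sleep(15)

if __name__ == "__main__":
    print(run_pipeline("mobile-app-release"))
```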
Required skills & experience:
- 3+ years of experience as DevOps Engineer and strong working knowledge in CI/CD pipelines
- Experience administering and deploying development CI/CD using Git, BitBucket, CodeCommit, Jira, Jenkins, Maven, Gradle, etc
- Strong knowledge in Linux-based infrastructures and AWS/Azure/GCP environment
- Working knowledge of AWS (IAM, EC2, VPC, ELB, ALB, Auto Scaling, Lambda, etc.)
- Experience with Docker containerization and clustering (Kubernetes/ECS)
- Experience with Android source (AOSP) cloning, building, and automation ecosystems
- Knowledge of scripting languages such as Python, Shell, Groovy, Bash, etc
- Familiar with Android ROM development and build process
- Knowledge of Agile Software Development methodologies
DevOps Tech Lead
at a global Internet of Things connected-solutions provider (H1)
- Work with developers to build out CI/CD pipelines, enable self-service build tools and reusable deployment jobs. Find, explore, and advocate for new technologies for enterprise use.
- Automate the provisioning of environments
- Promote new DevOps tools to simplify the build process and the entire continuous delivery flow.
- Manage a Continuous Integration and Deployment environment.
- Coordinate and scale the evolving build and cloud deployment systems across all product development teams.
- Work independently, with, and across teams. Establishing smooth-running environments is paramount to your success and happiness.
- Encourage innovation, implementation of cutting-edge technologies, inclusion, outside-of-the-box thinking, teamwork, self-organization, and diversity.
Technical Skills
- Experience with AWS multi-region/multi-AZ deployed systems, auto-scaling of EC2 instances, CloudFormation, ELBs, VPCs, CloudWatch, SNS, SQS, S3, Route53, RDS, IAM roles, and security groups (see the sketch after this list)
- Experience in Data Visualization and Monitoring tools such as Grafana and Kibana
- Experienced in Build and CI/CD/CT technologies like GitHub, Chef, Artifactory, Hudson/Jenkins
- Experience with log collection, filter creation, and analysis, builds, and performance monitoring/tuning of infrastructure.
- Automate the provisioning of environments by pulling strings with Puppet, cooking up recipes with Chef, or using Ansible, and the deployment of those environments using containers like Docker or rkt (with at least some configuration-management tooling kept under version control).
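On the monitoring and alerting side, a common pattern is a CloudWatch alarm wired to an SNS topic. The sketch below is an assumed example using boto3; the Auto Scaling group name and topic ARN are placeholders.

```python
# Illustrative monitoring/alerting sketch: a CloudWatch alarm on EC2 CPU utilization
# that notifies an SNS topic. ASG name and topic ARN are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,                 # evaluate over 5-minute windows
    EvaluationPeriods=2,        # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```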
Qualifications:
- B.E/ B.Tech/ M.C.A in Computer Science, Electronics and Communication Engineering, Electronics and Electrical Engineering.
- Minimum 60% in Graduation and Post-Graduation.
- Good verbal and written communication skills
PRAXINFO Hiring DevOps Engineer.
Position: DevOps Engineer
Job Location: C.G. Road, Ahmedabad
EXP: 1-3 years
Salary: 40K - 50K
Required skills:
⦿ Good understanding of cloud infrastructure (AWS, GCP etc)
⦿ Hands on with Docker, Kubernetes or ECS
⦿ Ideally a strong Linux background (RHCSA, RHCE)
⦿ Good understanding of monitoring systems (Nagios etc.) and logging solutions (Elasticsearch etc.)
⦿ Microservice architectures
⦿ Experience with distributed systems and highly scalable systems
⦿ Demonstrated history of automating operations processes via services and tools (Puppet, Ansible, etc.); see the sketch below
⦿ Systematic problem-solving approach coupled with a strong sense of ownership and drive.
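As a small, assumed example of automating an operations process with Ansible, the sketch below invokes a playbook from Python; the playbook and inventory file names are hypothetical placeholders.

```python
# Minimal sketch: run an Ansible playbook from Python to automate an ops task.
# Playbook and inventory file names are hypothetical placeholders.
import subprocess

def run_playbook(playbook: str, inventory: str, check_only: bool = False) -> None:
    """Run an Ansible playbook, optionally in --check (dry-run) mode."""
    cmd = ["ansible-playbook", "-i", inventory, playbook]
    if check_only:
        cmd.append("--check")
    subprocess.run(cmd, check=True)  # raises CalledProcessError on a failed run

run_playbook("site.yml", "inventory/production.ini", check_only=True)
```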
If anyone is interested, then share your resume at hiring at praxinfo dot com!
#linux #devops #engineer #kubernetes #docker #containerization #python #shellscripting #git #jenkins #maven #ant #aws #RHCE #puppet #ansible