11+ Reliability Engineering Jobs in Pune | Reliability Engineering Job Openings in Pune
Apply to 11+ Reliability Engineering jobs in Pune on CutShort.io. Explore the latest Reliability Engineering job opportunities across top companies like Google, Amazon & Adobe.
Job Summary:
We are seeking a Senior DevOps & SRE Engineer to join our team and help us build, deploy, and maintain our infrastructure and applications. The ideal candidate will have experience working in a fast-paced environment and a strong background in DevOps and Site Reliability Engineering (SRE). You will be responsible for ensuring the reliability, scalability, and security of our applications and infrastructure.
Responsibilities:
- Build and maintain our CI/CD pipeline and deployment automation tools
- Design and implement monitoring and alerting systems to ensure the health of our applications and infrastructure
- Work closely with development teams to ensure that code is deployed in a reliable and scalable manner
- Participate in on-call rotations to provide 24/7 support for our production systems
- Develop and maintain disaster recovery plans and processes
- Continuously improve our infrastructure and processes to ensure scalability, reliability, and security
- Mentor and provide technical leadership to junior team members
- Keep up-to-date with industry best practices and emerging technologies in DevOps and SRE
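The monitoring-and-alerting responsibility above can be sketched as a minimal error-budget check. This is an illustrative sketch only; the function name and the example SLO value are assumptions, not anything from the posting:

```python
def error_rate_alert(total_requests: int, failed_requests: int, slo: float = 0.999) -> bool:
    """Illustrative sketch: fire an alert when the observed error rate
    exceeds the error budget implied by the SLO (e.g. 99.9% -> 0.1%)."""
    if total_requests == 0:
        return False  # no traffic, nothing to alert on
    error_rate = failed_requests / total_requests
    return error_rate > (1.0 - slo)
```

In practice, a system like Prometheus with Alertmanager would evaluate such a condition over a rolling time window rather than a single pair of counters.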
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 5+ years of experience in DevOps or SRE
- Strong programming skills in at least one of the following languages: Python, Go, Ruby, or Java
- Experience with infrastructure as code tools such as Terraform or CloudFormation
- Experience with containerization technologies such as Docker and Kubernetes
- Strong understanding of networking concepts such as TCP/IP, DNS, and load balancing
- Experience with monitoring and logging tools such as Prometheus, Grafana, and ELK stack
- Excellent problem-solving skills and the ability to troubleshoot complex issues in a fast-paced environment
- Strong communication and collaboration skills with both technical and non-technical stakeholders
Preferred Qualifications:
- Experience with cloud providers such as AWS or Azure
- Experience with building and maintaining large-scale distributed systems
- Experience with database technologies such as MySQL, PostgreSQL, or MongoDB
- Experience with automation tools such as Ansible or Chef
- Experience with Agile development methodologies such as Scrum or Kanban
If you are passionate about DevOps and SRE and have the skills and experience we are looking for, we encourage you to apply for this exciting opportunity.
We are looking for a person who has good knowledge of digital marketing and is well versed in SEO, SMO, and the Google Ads console.
Responsibilities:
• Researching trending content and creating content around it.
• Assisting the marketing team in daily admin tasks.
• Helping plan and organize marketing events.
• Growing the website through SEO and Google Analytics.
• Designing social campaigns informed by recent market trends.
If you are a people-person who loves the rewarding challenge of building a brand, share our vision, and have good marketing skills, we encourage you to apply.
A listed product development organization
Position: Site Reliability Engineer
Location: Pune (currently WFH; post-pandemic, you will need to relocate)
About the Organization:
A funded product development company headquartered in Singapore, with offices in Australia, the United States, Germany, the United Kingdom, and India. You will gain work experience in a global environment.
Job Description:
We are looking for an experienced DevOps / Site Reliability Engineer to join our team and be instrumental in taking our products to the next level.
In this role, you will work on bleeding-edge hybrid cloud / on-premise infrastructure handling billions of events and terabytes of data a day.
You will be responsible for working closely with various engineering teams to design, build and maintain a globally distributed infrastructure footprint.
As part of this role, you will be responsible for researching new technologies, managing a large fleet of active services and their underlying servers, automating the deployment, monitoring, and scaling of components, and optimizing the infrastructure for cost and performance.
Day-to-day responsibilities
- Ensure the operational integrity of the global infrastructure
- Design repeatable continuous integration and delivery systems
- Test and measure new methods, applications and frameworks
- Analyze and leverage various AWS-native functionality
- Support and build out an on-premise data center footprint
- Provide support and diagnose issues to other teams related to our infrastructure
- Participate in 24/7 on-call rotation (If Required)
Requirements
- Expert-level administration of Linux-based systems
- Experience managing distributed data platforms (Kafka, Spark, Cassandra, etc.); Aerospike experience is a plus
- Experience with production deployments of Kubernetes Cluster
- Experience in automating provisioning and managing Hybrid-Cloud infrastructure (AWS, GCP and On-Prem) at scale.
- Knowledge of monitoring platforms (Prometheus, Grafana, Graphite).
- Experience with distributed storage systems such as Ceph or GlusterFS.
- Experience in virtualisation with KVM, oVirt, and OpenStack.
- Hands-on experience with infrastructure-as-code and configuration management tools such as Terraform and Ansible
- Bash and Python Scripting Expertise
- Network troubleshooting experience (TCP, DNS, IPv6 and tcpdump)
- Experience with continuous delivery systems (Jenkins, Gitlab, BitBucket, Docker)
- Experience managing hundreds to thousands of servers globally
- Enjoy automating tasks, rather than repeating them
- Capable of estimating costs of various approaches, and finding simple and inexpensive solutions to complex problems
- Strong verbal and written communication skills
- Ability to adapt to a rapidly changing environment
- Comfortable collaborating and supporting a diverse team of engineers
- Ability to troubleshoot problems in complex systems
- Flexible working hours and ability to participate in 24/7 on call support with other team members whenever required.
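The TCP/DNS troubleshooting skills listed above can be illustrated with a small stdlib-only Python sketch. The function name and report format are illustrative assumptions; this is a first-pass diagnostic, not a replacement for tools like tcpdump:

```python
import socket

def check_endpoint(host: str, port: int, timeout: float = 3.0) -> dict:
    """Resolve a hostname and attempt a TCP connection, returning a small report."""
    report = {"host": host, "addresses": [], "tcp_ok": False}
    try:
        # getaddrinfo covers both IPv4 and IPv6 (A and AAAA records)
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        report["addresses"] = sorted({info[4][0] for info in infos})
    except socket.gaierror as exc:
        report["dns_error"] = str(exc)  # resolution failed; no point trying TCP
        return report
    try:
        with socket.create_connection((host, port), timeout=timeout):
            report["tcp_ok"] = True  # three-way handshake completed
    except OSError as exc:
        report["tcp_error"] = str(exc)  # DNS was fine, TCP connect was not
    return report
```

Separating the DNS step from the TCP step tells you immediately which layer to investigate next.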
Responsibilities
Develop high-quality, secure, stable code for use in products and solutions for customers, making it easier for other developers to maintain, enhance, reuse, and localize
Work with the Product Owner and/or Product Manager/Team to understand and help refine functional requirements for new products
Develop and outline architecture and relationships between subsystems, and participate in reviews of those designs
Investigate and, if necessary, prototype technologies and algorithms relating to the task
Break down a large problem into smaller components and provide a clear solution for each piece
Ensure new code, features, or software products meet performance goals/metrics
Research external best practices and emerging technologies for possible incorporation into company products and methodologies
Write and maintain code following Test-Driven-Design principles
Minimum Qualifications
Bachelor’s degree in any Engineering discipline from a reputed engineering college
5 to 7 years of strong programming skills in Web and Cloud Technologies
Should have a very strong foundation in JavaScript, and must have a clear understanding of basic web fundamentals such as prototype-based inheritance, scopes, the Event Loop, memory management in JS, etc.
Should be good at developing modular front-end applications and is expected to know how to create good abstractions that can be reused. Should also have an in-depth understanding of the latest ES6 standards, such as spread operators, arrow functions, etc.
Should be strong in OOAD concepts
Should have strong hands-on experience in React and TypeScript
Should have hands-on experience in at least one backend language
Should understand the latest paradigms in front-end development, such as the Pub-Sub pattern, Redux, RxJS, Service Workers, client caching, lazy loading, dynamic injection, bundle optimisation, etc.
Should have very good hands-on knowledge of CSS3 & HTML5
Should have good knowledge of the Postman API Platform
Should have experience with Amazon Web Services – ECS, Lambda, S3, SQS, etc.
Should have hands-on experience developing RESTful web services and integrating them with heterogeneous clients
Should have hands-on experience with relational or NoSQL databases
Should have a very clear understanding of TDD, with good exposure to writing UTs and designing UI components with testability in mind
Should have strong communication and analytical skills
Flexibility under changing conditions and the ability to multi-task between projects
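The Pub-Sub pattern named in the qualifications above decouples publishers from subscribers through named topics. Here is a minimal in-process sketch (in Python for brevity; the class and method names are illustrative, not from any specific library):

```python
from collections import defaultdict

class EventBus:
    """Minimal Pub-Sub sketch: publishers and subscribers never reference
    each other directly; they only share a topic name."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callable to be invoked for every event on the topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        """Deliver the payload synchronously to every handler on the topic."""
        for handler in self._subscribers[topic]:
            handler(payload)
```

Front-end libraries such as Redux and RxJS build on the same decoupling idea, with stores and observables in place of this explicit bus.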
Preferred Qualifications
Hands-on experience with C++
Nice to have experience with Python
Nice to have experience with CI/CD tools like Docker, Jenkins, etc
Nice to have hands-on experience with CSS pre-processors such as SASS, LESS, etc.
Talent Acquisition Specialist
Who are we?
“Re-imagining credit and payments from First Principles”
Payments is an interesting engineering challenge in itself with requirements of low latency, transactional guarantees, security, and high scalability. When we add credit and engagement into the mix, the challenge becomes even more interesting with underwriting and recommendation algorithms working on large data sets. We have eliminated the current call center, sales agent, and SMS-based processes with a mobile app that puts the customers in complete control. To stay agile, the entire stack is built on the cloud with modern technologies.
OneCard (Best credit card app): www.getonecard.app
OneScore (over 10 million downloads): www.onescore.app
What you will do:
- Execute new ways of attracting and hiring tech and product talent - PAN India across various technologies.
- Manage complete recruitment life cycle (Source > Screen > Interview > Hire)
- Conduct interview assessments combining various methods including technical assessments.
- Build a healthy talent pipeline across the tech/product domain.
- Deliver exceptional candidate experience throughout the process.
- Collaborate with the HR team, delivering on key internal SLAs.
- Conduct market intelligence in sourcing candidates, mapping, negotiating offers, and decision-making.
- Manage vendor relationships.
Who should you be?
- Passionate about technology, hiring and networking with people.
- Minimum 3 to 5 years of Technical recruitment experience (preferably with Product/ Fintech Companies/ startups).
- Comfortable working with a fast-paced startup environment.
- Skilled with G-Suite applications, preferably ATS experience.
- Preferably someone with a technical education background.
- Strong communicator and go-getter.
Work Location: Aundh, Pune office (the role is on-site with WFH flexibility owing to environment and business drivers)
Job Description for Drools Developer:
- Hands-on experience in design and development with the Drools rule engine (minimum 1.5 years) and Core Java
- Exceptional ability to communicate with both technical and non-technical people
- Expertise in object oriented programming
- Experience in Deploying and supporting core and advanced features
- Good exposure to SQL and databases such as Oracle or MySQL
- Candidate should be well versed in Agile processes
- Strong ability to work collaboratively across diverse groups of business and technical stakeholders is required.
As a polyglot developer, ideally you should have:
- 1.5+ years of development experience using Java, Scala, Python, or similar technologies
- Hands-on experience in coding and implementation of complex, custom-built applications
- Working knowledge of build tools like Maven/sbt and code versioning systems like Git/Bitbucket/CVS/SVN
- Familiarity with a few databases, like MySQL, Oracle, PostgreSQL, SQL Server, NoSQL, etc.
- Great OO skills, including strong design-patterns knowledge
- Good communication and the ability to work in a consulting environment
What you will do:
- Think through hard problems in a consultancy environment, and work with amazing people to make the solutions a reality
- Work in a dynamic, collaborative, non-hierarchical environment where your talent is valued over your job title or years of experience
- Build custom software using the latest technologies and tools, and craft your own career path
- Provide solutions to real problems in the Big Data world
- R&D on the latest tools, techniques, and cloud services
- Automate manual, time-consuming tasks
- Hands-on coding, usually in a pair-programming environment
- Work in highly collaborative teams and build quality code
- Work in lots of different domains and client environments
- Understand the business domain deeply
We are a team of technology-agnostic, passionate people who aim to provide solutions to real-world Big Data problems.
We are building solutions that will help our customers automatically migrate their RDBMS systems to the latest Big Data platforms and tools such as Spark, Apex, Flink, etc. For more information, visit our products webpage.
Experience: 2 to 4 years
Skills: SEO, SMO, SMM, Email Marketing, Content Marketing, Content Publishing, Google Ads & Analytics.
Product: B2B, SaaS platform for Utilities and Energy
Location: Pune
For more details, please visit: www.bynry.com
Role and Responsibilities
- Execute data mining projects, training and deploying models over a typical duration of 2 to 12 months.
- The ideal candidate should be able to innovate, analyze the customer requirement, develop a solution within the time box of the project plan, and execute and deploy the solution.
- Integrate the data mining projects as embedded data mining applications in the FogHorn platform (on Docker or Android).
Core Qualifications
Candidates must meet ALL of the following qualifications:
- Have analyzed, trained and deployed at least three data mining models in the past. If the candidate did not directly deploy their own models, they will have worked with others who have put their models into production. The models should have been validated as robust over at least an initial time period.
- Three years of industry work experience, developing data mining models which were deployed and used.
- Programming experience in Python is core, using data-mining-related libraries like Scikit-Learn. Other relevant Python mining libraries include NumPy, SciPy, and Pandas.
- Data mining algorithm experience in at least 3 algorithms across: prediction (statistical regression, neural nets, deep learning, decision trees, SVM, ensembles), clustering (k-means, DBSCAN, or other), or Bayesian networks
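As a concrete instance of one clustering algorithm named above, here is a minimal k-means sketch on 1-D data in pure Python. It is illustrative only; production work would use a library implementation such as Scikit-Learn's KMeans, and the function name here is an assumption:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Tiny k-means sketch on 1-D data: alternate between assigning each
    point to its nearest centroid and recomputing centroids as cluster means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize centroids from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assignment step: index of the nearest centroid by squared distance
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # update step: each centroid moves to the mean of its cluster
        centroids = [
            sum(c) / len(c) if c else centroids[i]  # leave empty clusters in place
            for i, c in enumerate(clusters)
        ]
    return sorted(centroids)
```

The same assign/update loop generalizes to higher dimensions by swapping the squared-difference distance for a vector norm.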
Bonus Qualifications
Any of the following extra qualifications will make a candidate more competitive:
- Soft Skills
- Sets expectations, develops project plans, and delivers on them.
- Experience adapting technical dialogue to the right level for the audience (i.e. executives) or specific jargon for a given vertical market and job function.
- Technical skills
- Commonly, candidates have a MS or Ph.D. in Computer Science, Math, Statistics or an engineering technical discipline. BS candidates with experience are considered.
- Have managed past models in production over their full life cycle until model replacement is needed. Have developed automated model refreshing on newer data. Have developed frameworks for model automation as a prototype for product.
- Training or experience in Deep Learning, such as TensorFlow, Keras, convolutional neural networks (CNN) or Long Short Term Memory (LSTM) neural network architectures. If you don’t have deep learning experience, we will train you on the job.
- Shrinking deep learning models, optimizing to speed up execution time of scoring or inference.
- OpenCV or other image processing tools or libraries
- Cloud computing: Google Cloud, Amazon AWS or Microsoft Azure. We have integration with Google Cloud and are working on other integrations.
- Experience with tree-ensemble methods such as XGBoost or Random Forests is helpful.
- Complex Event Processing (CEP) or other streaming data as a data source for data mining analysis
- Time series algorithms from ARIMA to LSTM to Digital Signal Processing (DSP).
- Bayesian Networks (BN), a.k.a. Bayesian Belief Networks (BBN) or Graphical Belief Networks (GBN)
- Experience with PMML is of interest (see www.DMG.org).
- Vertical experience in Industrial Internet of Things (IoT) applications:
- Energy: Oil and Gas, Wind Turbines
- Manufacturing: Motors, chemical processes, tools, automotive
- Smart Cities: Elevators, cameras on population or cars, power grid
- Transportation: Cars, truck fleets, trains
About FogHorn Systems
FogHorn is a leading developer of “edge intelligence” software for industrial and commercial IoT application solutions. FogHorn’s Lightning software platform brings the power of advanced analytics and machine learning to the on-premise edge environment enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance and operational intelligence use cases. FogHorn’s technology is ideally suited for OEMs, systems integrators and end customers in manufacturing, power and water, oil and gas, renewable energy, mining, transportation, healthcare, retail, as well as Smart Grid, Smart City, Smart Building and connected vehicle applications.
Press: https://www.foghorn.io/press-room/
Awards: https://www.foghorn.io/awards-and-recognition/
- 2019 Edge Computing Company of the Year – Compass Intelligence
- 2019 Internet of Things 50: 10 Coolest Industrial IoT Companies – CRN
- 2018 IoT Platforms Leadership Award & Edge Computing Excellence – IoT Evolution World Magazine
- 2018 10 Hot IoT Startups to Watch – Network World. (Gartner estimated 20 billion connected things in use worldwide by 2020)
- 2018 Winner in Artificial Intelligence and Machine Learning – Globe Awards
- 2018 Ten Edge Computing Vendors to Watch – ZDNet & 451 Research
- 2018 The 10 Most Innovative AI Solution Providers – Insights Success
- 2018 The AI 100 – CB Insights
- 2017 Cool Vendor in IoT Edge Computing – Gartner
- 2017 20 Most Promising AI Service Providers – CIO Review
Our Series A round was for $15 million. Our Series B round, in October 2017, was for $30 million. Investors include: Saudi Aramco Energy Ventures, Intel Capital, GE, Dell, Bosch, Honeywell, and The Hive.
About the Data Science Solutions team
In 2018, our Data Science Solutions team grew from 4 to 9 people, and we are growing again from 11. We work on revenue-generating projects for clients, such as predictive maintenance, time-to-failure prediction, and manufacturing defect detection. About half of our projects have been related to vision recognition or deep learning. We are not only working on consulting projects but also developing vertical solution applications that run on our Lightning platform, with embedded data mining.
Our data scientists like our team because:
- We care about “best practices”
- We have a direct impact on the company’s revenue
- We give and receive mentoring as part of the collaborative process
- Questioning and challenging the status quo with data is safe
- Intellectual curiosity is balanced with humility
- We present papers or projects in our “Thought Leadership” meeting series, to support continuous learning