
Role:
- Developing a good understanding of the solutions which Company delivers, and how these link to Company’s overall strategy.
- Making suggestions towards shaping the strategy for a feature and engineering design.
- Managing own workload and usually delivering unsupervised. Accountable for their own workstream or the work of a small team.
- Understanding Engineering priorities and being able to focus on these, helping others to remain focused too.
- Acting as the Lead Engineer on a project, helping ensure others follow Company processes, such as release and version control.
- An active member of the team, through useful contributions to projects and in team meetings.
- Supervising others: deputising for a Lead and/or supporting them with tasks, mentoring new joiners, interns, and Masters students, and sharing knowledge and learnings with the team.
Requirements:
- Strong, proven professional programming experience.
- Strong command of Algorithms, Data structures, Design patterns, and Product Architectural Design.
- Good understanding of DevOps, cloud technologies, CI/CD, serverless, and Docker, preferably on AWS.
- Proven track record and expertise in one of the following fields: DevOps, Frontend, or Backend.
- Excellent coding and debugging skills in any language, with command of at least one programming paradigm; JavaScript, Python, or Go preferred.
- Experience with at least one type of database system: RDBMS or NoSQL.
- Ability to document requirements and specifications.
- A naturally inquisitive and problem-solving mindset.
- Strong experience in using Agile or Scrum techniques to build quality software.
- Advantage: experience with React.js, AWS, Node.js, Golang, Apache Spark, ETL tools, or data integration systems; AWS certification; experience building a product from scratch at a product company; good communication skills; open-source contributions; and proven competitive programming experience.

Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Knowledge in Linux/Unix Administration and Python/Shell Scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and monitoring tools like Zabbix, CloudWatch, and Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, and caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
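Monitoring stacks like the ones listed above often start life as a small script. As a purely illustrative sketch (the log format, regex, and alert threshold are assumptions, not any specific tool's interface), the underlying idea in Python:

```python
# Minimal sketch: compute an HTTP error rate from access-log lines and
# flag it when it crosses an alert threshold. The combined-log format and
# the 5% threshold are illustrative assumptions.
import re

LOG_LINE = re.compile(r'"\s*(\d{3})\s')  # status code after the quoted request

def error_rate(log_lines):
    """Fraction of requests whose status code is 5xx."""
    statuses = [m.group(1) for line in log_lines if (m := LOG_LINE.search(line))]
    if not statuses:
        return 0.0
    errors = sum(1 for s in statuses if s.startswith("5"))
    return errors / len(statuses)

def should_alert(log_lines, threshold=0.05):
    return error_rate(log_lines) > threshold
```

A real setup would feed this kind of check into CloudWatch, Zabbix, or Nagios rather than re-implement it, but the rate-over-threshold logic is the same.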
Multiplier enables companies to employ anyone, anywhere in a few clicks. Our
SaaS platform combines the multi-local complexities of hiring & paying employees
anywhere in the world, and automates everything. We are passionate about
creating a world where people can get a job they love, without having to leave the
people they love.
We are an early stage start up with a "Day one" attitude and we are building a
team that will make Multiplier the market leader in this space. Every day is an
exciting one at Multiplier right now because we are figuring out a real problem in
the market and building a first-of-its-kind product around it. We are looking for
smart and talented people who will add to our collective energy and share the
same excitement in making Multiplier a big deal. We are headquartered in
Singapore, but our team is remote.
What will I be doing? 👩💻👨💻
Owning and managing our cloud infrastructure on AWS.
Working as part of product development from inception to launch, and own
deployment pipelines through to site reliability.
Ensuring a high availability production site with proper alerting, monitoring and
security in place.
Creating an efficient environment for product development teams to build, test
and deploy features quickly by providing multiple environments for testing and
staging.
Using infrastructure as code and the best DevOps methods and tools to
innovate and keep improving.
Creating an automation culture and adding automation wherever it is needed.
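"Infrastructure as code" here means describing resources declaratively in versioned files rather than clicking through a console. A minimal, hypothetical sketch of the idea, generating a CloudFormation-style template as JSON with only the standard library (the resource name and bucket name are invented for illustration, and the template is not complete or validated):

```python
# Sketch: build a minimal CloudFormation-style template as plain data and
# serialize it. Resource names and properties are illustrative only.
import json

def make_template(bucket_name: str) -> dict:
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }

template_json = json.dumps(make_template("example-staging-assets"), indent=2)
```

In practice the template would be written by hand or with Terraform/CDK and applied through a pipeline; the point is that the desired state lives in reviewable code.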
What do I need? 🤓
4 years of industry experience in a similar DevOps role, preferably as part of
a SaaS product team. You can demonstrate the significant impact your work has
had on the product and/or the team.
Deep knowledge in AWS and the services available. 2 years of experience in
building complex architecture on cloud infrastructure.
Exceptional understanding of containerisation technologies and Docker. Hands-on
experience with Kubernetes, AWS ECS, and AWS EKS.
Experience with Terraform or any other infrastructure as code solutions.
Able to comfortably use at least one high-level programming language such
as Java, JavaScript, or Python. Hands-on experience scripting in Bash,
Groovy, and others.
Good understanding of security in web technologies and cloud infrastructure.
Able to work on and solve problems of a very complex nature, and enjoy doing it.
Willingness to quickly learn and use new technologies or frameworks.
Clear and responsive communication.
Key Responsibilities:
- Work with the development team to plan, execute and monitor deployments
- Capacity planning for product deployments
- Adopt best practices for deployment and monitoring systems
- Ensure the SLAs for performance and uptime are met
- Constantly monitor systems, suggest changes to improve performance and decrease costs.
- Ensure the highest standards of security
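Capacity planning and SLA targets like the ones above reduce to simple arithmetic. An illustrative sketch (the throughput figures and 30% headroom factor are assumptions, not prescriptions):

```python
# Sketch of two back-of-the-envelope calculations behind capacity planning
# and SLA tracking. All inputs are illustrative.
import math

def instances_needed(peak_rps: float, rps_per_instance: float, headroom: float = 0.3) -> int:
    """Instances required to serve peak load with spare headroom."""
    return math.ceil(peak_rps * (1 + headroom) / rps_per_instance)

def monthly_downtime_budget_minutes(sla: float) -> float:
    """Allowed downtime per 30-day month for an availability SLA, in minutes."""
    return (1 - sla) * 30 * 24 * 60

# e.g. 2,000 RPS peak at 150 RPS per instance with 30% headroom:
# instances_needed(2000, 150) -> ceil(2600 / 150) = 18
# A 99.9% SLA allows roughly 43.2 minutes of downtime per 30-day month.
```

Real plans also account for failure domains (surviving the loss of an AZ) and non-linear scaling, but this is the starting arithmetic.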
Key Competencies (Functional):
- Proficiency in coding in at least one scripting language - Bash, Python, etc.
- Has personally managed a fleet of servers (> 15)
- Understands the different environments: production, deployment, and staging
- Has worked in microservice / service-oriented architecture systems
- Has worked with automated deployment systems – Ansible / Chef / Puppet.
- Can write MySQL queries
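"Can write MySQL queries" in practice means joins and aggregates over operational data. A runnable sketch using Python's built-in sqlite3 module (the schema and data are invented for illustration, and SQLite's dialect differs slightly from MySQL's):

```python
# Sketch: the kind of ad hoc query the role calls for - count critical
# alerts per production host. Table names and rows are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hosts (id INTEGER PRIMARY KEY, name TEXT, env TEXT);
CREATE TABLE alerts (id INTEGER PRIMARY KEY, host_id INTEGER, severity TEXT);
INSERT INTO hosts VALUES (1, 'web-1', 'prod'), (2, 'web-2', 'prod'), (3, 'ci-1', 'staging');
INSERT INTO alerts VALUES (1, 1, 'critical'), (2, 1, 'warning'),
                          (3, 2, 'critical'), (4, 1, 'critical');
""")

rows = conn.execute("""
    SELECT h.name, COUNT(a.id) AS critical_alerts
    FROM hosts h
    JOIN alerts a ON a.host_id = h.id
    WHERE h.env = 'prod' AND a.severity = 'critical'
    GROUP BY h.name
    ORDER BY critical_alerts DESC, h.name
""").fetchall()
# rows == [('web-1', 2), ('web-2', 1)]
```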
We are looking for an experienced Sr. DevOps Consultant Engineer to join our team. The ideal candidate should have at least 5 years of experience.
We are retained by a promising startup located in Silicon Valley, backed by a Fortune 50 firm, with veterans from firms such as Zscaler, Salesforce, and Oracle. The founding team has been part of three unicorns and two successful IPOs in the past, and the company is well funded by Dell Technologies and Westwave Capital. The company has been widely recognized as an industry innovator in the Data Privacy and Security space, and is being built by proven cybersecurity executives who have successfully built and scaled high-growth security companies and built privacy programs as executives.
Responsibilities:
- Develop and maintain infrastructure as code using tools like Terraform, CloudFormation, and Ansible
- Manage and maintain Kubernetes clusters on EKS and EC2 instances
- Implement and maintain automated CI/CD pipelines for microservices
- Optimize AWS costs by identifying cost-saving opportunities and implementing cost-effective solutions
- Implement best security practices for microservices, including vulnerability assessments, SOC2 compliance, and network security
- Monitor the performance and availability of our cloud infrastructure using observability tools such as Prometheus, Grafana, and Elasticsearch
- Implement backup and disaster recovery solutions for our microservices and databases
- Stay up to date with the latest AWS services and technologies and provide recommendations for improving our cloud infrastructure
- Collaborate with cross-functional teams, including developers, and product managers, to ensure the smooth operation of our cloud infrastructure
- Experience with large scale system design and scaling services is highly desirable
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- At least 5 years of experience in AWS DevOps and infrastructure engineering
- Expertise in Kubernetes management, Docker, EKS, EC2, queues, Python threads, Celery optimization, load balancers, AWS cost optimization, Elasticsearch, container management, and observability best practices
- Experience with SOC2 compliance and vulnerability assessment best practices for microservices
- Familiarity with AWS services such as S3, RDS, Lambda, and CloudFront
- Strong scripting skills in languages like Python, Bash, and Go
- Excellent communication skills and the ability to work in a collaborative team environment
- Experience with agile development methodologies and DevOps practices
- AWS certification (e.g. AWS Certified DevOps Engineer, AWS Certified Solutions Architect) is a plus.
Notice period: able to join within a month.
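The "queues, Python threads" item above refers to the queue-plus-worker pattern that task systems like Celery build on. A toy, stdlib-only sketch of that pattern (job payloads and the worker count are illustrative):

```python
# Sketch: distribute jobs across worker threads via a shared queue.
# Real task queues (Celery, RQ) add brokers, retries, and persistence;
# this only shows the core fan-out pattern.
import queue
import threading

def run_jobs(jobs, num_workers=4):
    """jobs: iterable of (job_id, fn, arg). Returns {job_id: fn(arg)}."""
    q = queue.Queue()
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                job_id, fn, arg = q.get_nowait()
            except queue.Empty:
                return  # queue drained; this worker is done
            result = fn(arg)
            with lock:
                results[job_id] = result

    for job in jobs:
        q.put(job)
    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# run_jobs([(i, lambda x: x * x, i) for i in range(5)])
# -> {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
```

Note that threads suit I/O-bound jobs; CPU-bound work in Python usually calls for processes instead, which is one reason Celery supports multiple worker pools.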
Pixuate is a deep-tech AI start-up enabling businesses to make smarter decisions with our edge-based video analytics platform, offering innovative solutions across traffic management, industrial digital transformation, and smart surveillance. We aim to serve enterprises globally as a preferred partner for the digitization of visual information.
Job Description
We at Pixuate are looking for highly motivated and talented Senior DevOps Engineers to support building the next generation of innovative, deep-tech AI based products. If you are someone who has a passion for building great software, has an analytical mindset, enjoys solving complex problems, thrives in a challenging environment, is self-driven and constantly exploring and learning new technologies, and has the ability to succeed on your own merits and fast-track your career growth, we would love to talk!
What do we expect from this role?
- This role’s key area of focus is to coordinate and manage the product from development through deployment, working with the rest of the engineering team to ensure smooth functioning.
- Work closely with the Head of Engineering in building out the infrastructure required to deploy, monitor and scale the services and systems.
- Act as the technical expert, innovator, and strategic thought leader within the Cloud Native Development, DevOps and CI/CD pipeline technology engineering discipline.
- Should understand how the technology works and how its various components fit together, with a high-level understanding of working with various operating systems and their implications.
- Troubleshooting basic software or DevOps stack issues
You would be great at this job if you have the competencies mentioned below:
- B.Tech / M.Tech / MCA / BSc / MSc / BCA, preferably in Computer Science
- 5+ years of relevant work experience
- Knowledge of various DevOps tools and technologies
- Should have worked on tools like Docker, Kubernetes, Ansible in a production environment for data intensive systems.
- Experience in developing Continuous Integration / Continuous Delivery (CI/CD) pipelines, preferably using Jenkins, scripting (Shell / Python), and Git and Git workflows
- Experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment.
- Experience with the design and implementation of big data backup/recovery solutions.
- Strong Linux fundamentals and scripting; experience as Linux Admin is good to have.
- Working knowledge in Python is a plus
- Working knowledge of TCP/IP networking, SMTP, HTTP, load-balancers (ELB, HAProxy) and high availability architecture is a plus
- Strong interpersonal and communication skills
- Proven ability to complete projects according to outlined scope and timeline
- Willingness to travel within India and internationally whenever required
- Demonstrated leadership qualities in past roles
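A CI/CD pipeline of the Jenkins kind mentioned above is, at its core, an ordered sequence of stages that stops at the first failure. A toy sketch of that control flow (stage names and checks are invented; a real pipeline would shell out to build and test tools):

```python
# Toy sketch of CI pipeline control flow: run stages in order, stop at the
# first failure, and report what ran. Stages here are illustrative stubs.
def run_pipeline(stages):
    """stages: list of (name, fn) where fn() returns True on success."""
    completed = []
    for name, fn in stages:
        if not fn():
            return {"status": "failed", "at": name, "completed": completed}
        completed.append(name)
    return {"status": "success", "at": None, "completed": completed}

# Example: the "test" stage fails, so "deploy" never runs.
result = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),
    ("deploy", lambda: True),
])
# result == {"status": "failed", "at": "test", "completed": ["build"]}
```

Jenkins declarative pipelines, GitLab CI, and GitHub Actions all express this same stop-on-failure sequencing in their own configuration syntax.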
More about Pixuate:
Pixuate, owned by Cocoslabs Innovative Solutions Pvt. Ltd., is a leading AI startup building the most advanced Edge-based video analytics products. We are recognized for our cutting-edge R&D in deep learning, computer vision and AI and we are solving some of the most challenging problems faced by enterprises. Pixuate’s plug-and-play platform revolutionizes monitoring, compliance to safety, and efficiency improvement for Industries, Banks & Enterprises by providing actionable real-time insights leveraging CCTV cameras.
We have enabled customers such as Hindustan Unilever, Godrej, Secuira, L&T, Bigbasket, Microlabs, and Karnataka Bank, and are rapidly expanding our business to cater to the needs of the Manufacturing & Logistics and Oil and Gas sectors.
Rewards & Recognitions:
- Winner of Elevate by Startup Karnataka (https://pixuate.ai/thermal-analytics/).
- Winner of Manufacturing Innovation Challenge in the 2nd edition of Fusion 4.0’s MIC2020 organized by the NASSCOM Centre of Excellence in IoT & AI in 2021
- Winner of SASACT program organized by MEITY in 2021
Why join us?
You will get an opportunity to work with the founders, be part of the 0-to-1 journey, and get coached and guided. You will also get an opportunity to sharpen your skills by being innovative and contributing in the area of your personal interest. Our culture encourages innovation and freedom, and rewards high performers with faster growth and recognition.
Where to find us?
Website: http://pixuate.com/
Linked in: https://www.linkedin.com/company/pixuate-ai
Place of Work: Work from Office – Bengaluru
We are looking for a tech enthusiast to work in a challenging environment; a person who is self-driven, proactive, and has good experience in Azure, DevOps, ASP.NET, etc. Share your resume today if this interests you.
Job Location: Pune
About us:
JetSynthesys is a leading gaming and entertainment company with a wide portfolio of world-class products, platforms, and services. The company has a robust foothold in the global cricket community with its exclusive JV with Sachin Tendulkar for the popular Sachin Saga game and 100% ownership of Nautilus Mobile – the developer of India’s largest cricket simulation game, Real Cricket, the #1 cricket gaming franchise in the world. Standing atop the charts of organizations fueling the Indian esports industry, JetSynthesys was the earliest entrant in the esports space, with a founding 50% stake in India’s largest esports company, Nodwin Gaming, recently funded by the popular South Korean gaming firm Krafton.
Recently, the company has developed WWE Racing Showdown, a high-octane vehicular combat game borne out of a strategic partnership with WWE. Adding to the list is the newly launched board game - Ludo Zenith, a completely reimagined ludo experience for gamers, built in partnership with Square Enix - a Japanese gaming giant.
JetSynthesys Pvt. Ltd. is proud to be backed by Mr. Adar Poonawalla - Indian business tycoon and CEO of Serum Institute of India, Mr. Kris Gopalakrishnan – Co-founder of Infosys and the family offices of Jetline Group of Companies. JetSynthesys’ partnerships with large gaming companies in the US, Europe and Japan give it an opportunity to build great products not only for India but also for the world.
Responsibilities
- As a Security & Azure DevOps engineering technical specialist, you will be responsible for advising and assisting in the architecture, design, and implementation of secure infrastructure solutions
- Should be capable of technical deep dives into infrastructure, databases, and applications, specifically in operating and supporting high-performance, highly available services and infrastructure
- Deep understanding of cloud computing technologies on Windows, with demonstrated hands-on experience in the following domains:
- Experience in building, deploying, and monitoring Azure services, with strong IaaS and PaaS skills (Redis Cache, Service Bus, Event Hub, Cloud Services, etc.)
- Understanding of API endpoint management
- Able to monitor and maintain serverless architectures using Function / Web Apps
- Log analysis capabilities using Elasticsearch and Kibana dashboards
- Azure Core Platform: Compute, Storage, Networking
- Data Platform: SQL, Cosmos DB, MongoDB, and JQL queries
- Identity and Authentication: SSO Federation, AD / Azure AD, etc.
- Experience with Azure Storage, Backup, and ExpressRoute
- Hands-on experience with ARM templates
- Ability to write PowerShell & Python scripts to automate IT operations
- Working knowledge of Azure OMS and configuration of OMS dashboards is desired
- VSTS deployments
- You will help stabilize developed solutions by understanding the relevant application development, infrastructure, and operations implications of the developed solution.
- Use of DevOps tools to deliver and operate end-user services is a plus (e.g., Chef, New Relic, Puppet, etc.)
- Able to deploy and re-create resources
- Building Terraform configurations for cloud infrastructure
- Following ITIL/ITSM standards as best practice
- Pen testing and OWASP security testing will be an added bonus
- Load and performance testing using JMeter
- Capacity reviews on a daily basis
- Handling recurring issues and solving them using Ansible automation
- Jenkins pipelines for CI/CD
- Code review platform using SonarQube
- Cost analysis on a regular basis to keep the system and resources optimal
- Experience in ASP.NET, C#, and .NET programming is a must.
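The "scripts to automate IT operations" item above typically means small checks like the following. This is a hedged, illustrative sketch (mount names, sizes, and the 85% threshold are invented); the logic takes plain numbers so it is easy to test, whereas a real script would read live values from `shutil.disk_usage()` or an Azure monitoring API:

```python
# Sketch of a routine ops automation: flag filesystems whose usage
# exceeds a threshold so an operator (or an alerting hook) can act.
def over_threshold(mounts, threshold=0.85):
    """mounts: {name: (used_bytes, total_bytes)} -> sorted names above threshold."""
    return sorted(
        name for name, (used, total) in mounts.items()
        if total > 0 and used / total > threshold
    )

# over_threshold({"/": (90, 100), "/data": (40, 100)}) -> ["/"]
```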
Qualifications:
Minimum graduate (Preferred Stream: IT/Technical)
We are looking for a full-time remote DevOps Engineer who has worked with CI/CD automation, big data pipelines, and cloud infrastructure to solve complex technical challenges at scale that will reshape the healthcare industry for generations. You will get the opportunity to be involved in the latest tech in big data engineering, novel machine learning pipelines, and highly scalable backend development. The successful candidate will work in a team of highly skilled and experienced developers, data scientists, and the CTO.
Job Requirements
- Experience deploying, automating, maintaining, and improving complex services and pipelines
- Strong understanding of DevOps tools, processes, and methodologies
- Experience with AWS Cloud Formation and AWS CLI is essential
- The ability to work to project deadlines efficiently and with minimum guidance
- A positive attitude and enjoys working within a global distributed team
Skills
- Highly proficient working with CI/CD and automating infrastructure provisioning
- Deep understanding of the AWS Cloud platform and hands-on experience setting up and maintaining large-scale implementations
- Experience with JavaScript/TypeScript, Node, Python and Bash/Shell Scripting
- Hands on experience with Docker and container orchestration
- Experience setting up and maintaining big data pipelines, serverless stacks, and container infrastructure
- An interest in healthcare and medical sectors
- Technical degree with 4+ years of infrastructure and automation experience
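Container orchestration, mentioned in the skills above, commonly ships new versions via rolling updates: instances are replaced in batches so some capacity stays on the old version until the rollout completes. A toy, stdlib-only simulation of that sequencing (versions, fleet size, and batch size are illustrative; Kubernetes and ECS implement this with health checks and rollback on top):

```python
# Toy simulation of a rolling update: replace instances in fixed-size
# batches, yielding the fleet state after each batch.
def rolling_update(instances, new_version, batch_size=2):
    """Yield a snapshot of the fleet after each batch is replaced."""
    fleet = list(instances)
    for start in range(0, len(fleet), batch_size):
        for i in range(start, min(start + batch_size, len(fleet))):
            fleet[i] = new_version
        yield list(fleet)

states = list(rolling_update(["v1"] * 5, "v2", batch_size=2))
# states[0]  == ["v2", "v2", "v1", "v1", "v1"]
# states[-1] == ["v2", "v2", "v2", "v2", "v2"]
```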









