We are looking for a tech enthusiast to work in a challenging environment: a self-driven, proactive person with solid experience in Azure, DevOps, ASP.NET, and related technologies. Share your resume today if this interests you.
Job Location: Pune
About us:
JetSynthesys is a leading gaming and entertainment company with a wide portfolio of world-class products, platforms, and services. The company has a robust foothold in the global cricket community with its exclusive JV with Sachin Tendulkar for the popular Sachin Saga game and 100% ownership of Nautilus Mobile, the developer of India’s largest cricket simulation game, Real Cricket, which has become the #1 cricket gaming franchise in the world. Standing atop the charts of organizations fueling the Indian esports industry, JetSynthesys was among the earliest entrants in esports with a founding 50% stake in India’s largest esports company, Nodwin Gaming, which was recently funded by the South Korean gaming firm Krafton.
Recently, the company developed WWE Racing Showdown, a high-octane vehicular combat game born out of a strategic partnership with WWE. Adding to the list is the newly launched board game Ludo Zenith, a completely reimagined ludo experience for gamers, built in partnership with Square Enix, the Japanese gaming giant.
JetSynthesys Pvt. Ltd. is proud to be backed by Mr. Adar Poonawalla - Indian business tycoon and CEO of Serum Institute of India, Mr. Kris Gopalakrishnan – Co-founder of Infosys and the family offices of Jetline Group of Companies. JetSynthesys’ partnerships with large gaming companies in the US, Europe and Japan give it an opportunity to build great products not only for India but also for the world.
Responsibilities
- As a Security & Azure DevOps engineer technical specialist, you will be responsible for advising and assisting in the architecture, design, and implementation of secure infrastructure solutions
- Should be capable of technical deep dives into infrastructure, databases, and applications, specifically in operating and supporting high-performance, highly available services and infrastructure
- Deep understanding of cloud computing technologies on Windows, with demonstrated hands-on experience in the following domains:
- Experience building, deploying, and monitoring Azure services, with strong IaaS and PaaS skills (Redis Cache, Service Bus, Event Hub, Cloud Services, etc.)
- Understanding of API endpoint management
- Able to monitor and maintain serverless architectures using Function Apps and Web Apps
- Log analysis using Elasticsearch and Kibana dashboards
- Azure Core Platform: Compute, Storage, Networking
- Data Platform: SQL, Cosmos DB, MongoDB, and JQL queries
- Identity and Authentication: SSO federation, AD/Azure AD, etc.
- Experience with Azure Storage, Azure Backup, and ExpressRoute
- Hands-on experience with ARM templates (see the deployment sketch after this list)
- Ability to write PowerShell and Python scripts to automate IT operations
- Working knowledge of Azure OMS and configuration of OMS dashboards is desired
- VSTS (Azure DevOps) deployments
- You will help stabilize developed solutions by understanding their application development, infrastructure, and operations implications.
- Use of DevOps tools (e.g., Chef, New Relic, Puppet) to deliver and operate end-user services is a plus
- Able to deploy and re-create resources
- Building Terraform configurations for cloud infrastructure
- Following ITIL/ITSM standards as best practice
- Penetration testing and OWASP security testing experience is an added bonus
- Load and performance testing using JMeter
- Capacity reviews on a daily basis
- Handling recurring issues and resolving them with Ansible automation
- Jenkins pipelines for CI/CD
- Code quality and review platform using SonarQube
- Regular cost analysis to keep systems and resources optimized
- Experience with ASP.NET, C#, and .NET programming is a must.
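To make the ARM template and Python automation items above concrete, here is a minimal, illustrative sketch of driving an ARM template deployment from Python with the azure-identity and azure-mgmt-resource SDKs. The subscription ID, resource group, template file, and parameter names are hypothetical placeholders, not part of the role description.

```python
# Minimal sketch: deploying an ARM template with the Azure SDK for Python.
# Subscription, resource group, template path and parameters are placeholders.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-demo"              # placeholder
DEPLOYMENT_NAME = "storage-deployment"  # placeholder

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

with open("storage.json") as f:         # exported or hand-written ARM template
    template = json.load(f)

# Incremental mode leaves unrelated resources in the group untouched.
poller = client.deployments.begin_create_or_update(
    RESOURCE_GROUP,
    DEPLOYMENT_NAME,
    {
        "properties": {
            "template": template,
            "parameters": {"storageAccountName": {"value": "stdemo001"}},
            "mode": "Incremental",
        }
    },
)
print(poller.result().properties.provisioning_state)
```

The same pattern can be wrapped in a VSTS/Azure DevOps pipeline step so that infrastructure changes go through the usual CI/CD review flow.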
Qualifications:
Minimum graduate (Preferred Stream: IT/Technical)

Similar jobs
ketteQ is a supply chain planning and automation platform. We are looking for an experienced AWS DevOps Engineer to help manage AWS infrastructure and automation. This job comes with an attractive compensation package plus work-from-home and flex-time benefits. You will get to work on projects for large global brands with a highly experienced team based in the US and India. If you are a high-energy, motivated, initiative-taking individual, then this could be a fantastic opportunity for you. Candidates must meet the following requirements:
Duties & Responsibilities
- Deployment, automation, management, and maintenance of AWS cloud-based production system
- Build a deployment pipeline for AWS and Salesforce
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure, and maintain AWS cloud infrastructure defined as CloudFormation templates (see the sketch after this list)
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization
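As an illustration of the CloudFormation-based provisioning mentioned above, here is a minimal sketch using boto3; the region, stack name, template file, and parameter key are hypothetical placeholders.

```python
# Minimal sketch: provisioning a CloudFormation stack with boto3.
# Stack name, template file and parameter names are illustrative placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

with open("network.yaml") as f:   # hypothetical VPC/security-group template
    template_body = f.read()

cfn.create_stack(
    StackName="ketteq-demo-network",          # placeholder name
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "staging"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],    # needed when the template creates IAM resources
)

# Block until the stack reaches CREATE_COMPLETE (or raise on failure).
cfn.get_waiter("stack_create_complete").wait(StackName="ketteq-demo-network")
```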
Requirements
- At least 5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, Fargate, S3, CloudFormation)
- Strong understanding of how to secure AWS environments and meet compliance requirements
- Solid foundation of networking and Linux administration
- Experience with Docker, GitHub, Jenkins, CloudFormation, and deploying applications on AWS
- Ability to learn/use a wide variety of open source technologies and tools
- Database experience to help with monitoring and performance; PostgreSQL experience preferred
- AWS certification preferred
Education
- Bachelor's in Engineering or a related field
YOUR ‘OKR’ SUMMARY
OKR means Objectives and Key Results.
As a Cloud Engineer, you will understand the overall movement of data across the entire platform, find bottlenecks, define solutions, develop key pieces, write APIs, and own their deployment. You will work with internal and external development teams to discover these opportunities and to solve hard problems. You will also guide engineers in solving complex problems, developing acceptance tests for them, and reviewing the work and the test results.
What you will do
- End-to-end RHOSP deployment (undercloud and overcloud), treated as an NFVi deployment
- Installing Red Hat's OpenStack technology using OSP director
- Deploying a Red Hat OpenStack Platform based on Red Hat's reference architecture
- Deploying managed hosts with the required OpenStack parameters
- Deploying three (3) highly available (HA) controller hosts using Pacemaker and HAProxy
- Deploying all the supplied compute hosts that will host multiple VNFs (SR-IOV & DPDK)
- Implementing Ceph
- Integrating software-defined storage (Ceph) with RHOSP and the Red Hat OpenStack Platform operational tools per industry standards and best practices
- Detailed network configuration and implementation using Neutron networking with the VXLAN network type and the Modular Layer 2 (ML2) Open vSwitch plugin
- Integrating a monitoring solution with RHOSP (a small API sketch follows this list)
- Design and deployment of a common alarm and performance management solution
- Red Hat OpenStack management and monitoring
- VM alarm and performance management
- The Cloud Management Platform will be configured with day-to-day operational tools to measure CPU/memory/network utilization, etc., at the VM level
- Baseline Security Standard (BSS) and Vulnerability Assessment (VA) for RHOSP
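As a small illustration of programmatic RHOSP checks of the kind mentioned above, here is a sketch using the openstacksdk Python library; the cloud name refers to an assumed clouds.yaml entry, and the printed attributes may need adjusting to your SDK version.

```python
# Minimal sketch: inspecting an RHOSP overcloud from Python with openstacksdk.
# The cloud name "overcloud" is a placeholder for an entry in clouds.yaml.
import openstack

conn = openstack.connect(cloud="overcloud")

# List hypervisors (compute hosts) and their state, a quick health view
# of the deployed compute layer.
for hv in conn.compute.hypervisors():
    print(hv.name, hv.status, hv.state)

# List Neutron networks to confirm the ML2/VXLAN configuration is serving tenants
# (provider attributes are typically visible to admin users only).
for net in conn.network.networks():
    print(net.name, net.provider_network_type)
```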
Additional Advantage:
- Deep understanding of technology and passionate about what you do.
- Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
- Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
- Strong commitment to getting the most performance out of the systems you work on.
- Prior development of a large software project using a service-oriented architecture operating with real-time constraints.
What's In It for You
- You will get a chance to work on cloud-native and hyper-scale products
- You will be working with industry leaders in cloud.
- You can expect a steep learning curve.
- You will get experience solving real-time problems and, in the process, become a strong problem solver.
Benefits & Perks
- Competitive Salary
- Health Insurance
- Open Learning - 100% Reimbursement for online technical courses.
- Fast Growth - opportunities to grow quickly and surely
- Creative Freedom + Flat hierarchy
- Sponsorship for employees who represent the company at events and meetups.
- Flexible working hours
- 5-day work week
- Hybrid Working model (Office and WFH)
Our Hiring Process
Candidates for this position can expect the following hiring process (subject to successfully clearing each round):
- Initial Resume screening call with our Recruiting team
- Next, candidates will be invited to solve coding exercises.
- Next, candidates will be invited for the first technical interview
- Next, candidates will be invited for the final technical interview
- Finally, candidates will be invited for a Culture Plus interview with HR
- Candidates may be asked to interview with the Leadership team
- Successful candidates will subsequently be made an offer via email
As always, the interviews and screening call will be conducted via a mix of telephone and video calls.
So, if you are looking for an opportunity to really make a difference, make it with us…
Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state, or local laws.

Job Description:
• Contribute to customer discussions in collecting the requirement
• Engage in internal and customer POCs to realize the potential solutions envisaged for customers.
• Design/develop/migrate vRA blueprints and vRO workflows; strong hands-on knowledge of vROps and its integrations with applications and VMware solutions.
• Develop automation scripts to support the design and implementation of VMware projects.
Qualification:
• Maintain current, high-level technical knowledge of the entire VMware product portfolio and future product direction, along with in-depth knowledge in your areas of focus
• Maintain deep technical and business knowledge of cloud computing and networking applications, industry directions, and trends.
• Experience with REST APIs and/or Python programming; TypeScript/Node.js backend experience (a small API sketch follows this list)
• Experience with Kubernetes
• Familiarity with DevOps tools like Ansible, Puppet, and Terraform
• End-to-end experience in the architecture, design, and development of the VMware cloud automation suite, with good exposure to VMware products and/or solutions.
• Hands-on experience in automation, coding, debugging, and releases.
• Sound process knowledge from requirement gathering through implementation, deployment, and support.
• Experience working with global teams, customers, and partners, with solid communication skills.
• VMware CMA certification would be a plus
• An academic background (MS/BE/B.Tech) in IT/CS/ECE/EE is preferred.
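As an illustration of the REST API and Python experience mentioned above, here is a minimal sketch of authenticating to a vRealize Automation appliance and listing deployments with the requests library. The host, credentials, and endpoint paths follow the vRA 8.x IaaS API layout but should be treated as assumptions to verify against your appliance's API documentation.

```python
# Minimal sketch: talking to a vRealize Automation REST API from Python.
# Host, credentials and endpoint paths are assumptions; check your appliance's
# API docs, as exact paths and response fields can differ between versions.
import requests

VRA_HOST = "https://vra.example.local"          # hypothetical appliance
USERNAME, PASSWORD = "svc-automation", "<password>"  # placeholder credentials

# 1) Exchange credentials for a refresh token, then an access token.
refresh = requests.post(
    f"{VRA_HOST}/csp/gateway/am/api/login",
    json={"username": USERNAME, "password": PASSWORD},
    verify=False,                                # lab only; use proper CA certs in production
).json()["refresh_token"]

access = requests.post(
    f"{VRA_HOST}/iaas/api/login",
    json={"refreshToken": refresh},
    verify=False,
).json()["token"]

# 2) List existing deployments as a simple smoke test of the integration.
deployments = requests.get(
    f"{VRA_HOST}/iaas/api/deployments",
    headers={"Authorization": f"Bearer {access}"},
    verify=False,
).json()
for item in deployments.get("content", []):
    print(item.get("name"), item.get("status"))
```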
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, containers, etc. As part of our core development crew, you'll figure out how to deploy applications with high availability and fault tolerance, along with a monitoring solution that has alerts for multiple microservices and pipelines. Come save the planet with us!
Your Role
- Build applications at scale that can go up and down on command.
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available, scalable PostgreSQL database cluster.
- Maintain alerting and monitoring systems using Prometheus, Grafana, and Elasticsearch (see the sketch after this list).
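As a small illustration of the Prometheus/Grafana monitoring item above, here is a sketch of a service exposing custom metrics with the prometheus_client library; the metric names and port are hypothetical placeholders.

```python
# Minimal sketch: exposing custom metrics from a microservice so Prometheus
# can scrape them and Grafana can alert on them. Metric names are placeholders.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

INGESTED = Counter("bsa_records_ingested_total", "Records ingested by the pipeline")
LATENCY = Histogram("bsa_ingest_seconds", "Time spent processing one batch")

def process_batch() -> None:
    """Stand-in for one unit of pipeline work."""
    with LATENCY.time():            # observe how long the batch takes
        time.sleep(random.random() / 10)
        INGESTED.inc(100)           # pretend we ingested 100 records

if __name__ == "__main__":
    start_http_server(9100)         # Prometheus scrapes http://host:9100/metrics
    while True:
        process_batch()
```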
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code - CloudFormation, Terraform, Ansible.
- CI/CD concepts and implementation using CodePipeline and GitHub Actions.
- Strong command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced containerization - Docker, Kubernetes, ECS.
- Experience with managed services like database clusters and distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community whose work you can use, contribute to, and collaborate on.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but there's also the non-work fun aspect, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by availing your paid leave.

Requirements and Qualifications
- Bachelor’s degree in Computer Science Engineering or in a related field
- 4+ years of experience
- Excellent analytical and problem-solving skills
- Strong knowledge of Linux systems and internals
- Programming experience in Python/Shell scripting
- Strong AWS skills with knowledge of EC2, VPC, S3, RDS, CloudFront, Route 53, etc.
- Experience in containerization (Docker) and container orchestration (Kubernetes)
- Experience in DevOps & CI/CD tools such as Git, Jenkins, Terraform, Helm
- Experience with SQL and NoSQL databases such as MySQL, MongoDB, and Elasticsearch
- Debugging and troubleshooting skills using tools such as strace, tcpdump, etc.
- Good understanding of networking protocols and security concerns (VPN, VPC, IG, NAT, AZ, Subnet)
- Experience with monitoring and data analysis tools such as Prometheus, EFK, etc.
- Good communication and collaboration skills and attention to detail
- Participation in rotating on-call duties
- As a DevOps engineer, you will be responsible for:
- Automated provisioning of infrastructure in AWS/Azure/OpenStack environments.
- Creation of CI/CD pipelines to ensure smooth delivery of projects.
- Proactive monitoring of the overall infrastructure (logs, resources, etc.)
- Deployment of applications to various cloud environments (see the sketch after this list).
- Should be able to lead/guide a team towards achieving goals and meeting the defined milestones.
- Practice and implement best practices in every aspect of project deliverables.
- Keeping yourself up to date with new frameworks and tools and enabling the team to use them.
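As an illustration of application deployment to a cloud environment, here is a minimal sketch using the official Kubernetes Python client; the namespace, deployment name, and container image are hypothetical placeholders.

```python
# Minimal sketch: deploying a container image to a Kubernetes cluster with the
# official Python client. Namespace, image and deployment name are placeholders.
from kubernetes import client, config

config.load_kube_config()                      # uses your local ~/.kube/config
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo-api"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-api"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="demo-api",
                        image="registry.example.com/demo-api:1.0.0",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="staging", body=deployment)
```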
Skills Required
- Experience in automation of CI/CD processes using tools such as Git, Gerrit, Jenkins, CircleCI, Azure Pipelines, and GitLab
- Experience working with AWS and Azure platforms and cloud-native automation tools such as AWS CloudFormation and Azure Resource Manager.
- Experience in monitoring solutions such as ELK Stack, Splunk, Nagios, Zabbix, Prometheus
- Web Server/Application Server deployments and administration.
- Good Communication, Team Handling, Problem-solving, Work Ethic, and Creativity.
- Work experience of at least 1 year in the following areas is mandatory.
If you do not have the relevant experience, please do not apply.
- Any cloud provider (AWS, GCP, Azure, OpenStack)
- Any of the configuration management tools (Ansible, Chef, Puppet, Terraform, Powershell DSC)
- Scripting languages (PHP, Python, Shell, Bash, etc.)
- Docker or Kubernetes
- Troubleshoot and debug infrastructure, network, and operating system issues.
The mission of R&D IT Design Infrastructure is to offer a state-of-the-art design environment for the chip hardware designers. The R&D IT design environment is a complex landscape of EDA applications, high-performance compute, and storage environments - consolidated in five regional datacenters. Over 7,000 chip hardware designers, spread across 40+ locations around the world, use this state-of-the-art design environment to design new chips and drive the company's innovation. The following figures give an idea of the scale: the landscape has 75,000+ cores, 30+ PBytes of data, and serves 2,000+ CAD applications and versions. The operational service teams are organized globally to provide 24/7 support to the chip hardware design and software design projects.
Since the landscape is really too complex to manage the traditional way, it is our strategy to transform our R&D IT design infrastructure into “software-defined datacenters”. This transformation entails a different way of work and a different mind-set (DevOps, Site Reliability Engineering) to ensure that our IT services are reliable. That’s why we are looking for a DevOps Linux Engineer to strengthen the team that is building a new on-premise software defined virtualization and containerization platform (PaaS) for our IT landscape, so that we can manage it with best practices from software engineering and offer the IT service reliability which is required by our chip hardware design community.
It will be your role to develop and maintain the base Linux OS images that are offered via automation to the customers of the internal (on-premise) cloud platforms.
Your responsibilities as DevOps Linux Engineer:
• Develop and maintain the base RedHat Linux operating system images
• Develop and maintain code to configure and test the base OS image
• Provide input to support the team in designing, developing, and maintaining automation products with playbooks (YAML) and modules (Python/PowerShell) in tools like Ansible Tower and ServiceNow (a small module sketch follows this list)
• Test and verify the code produced by the team (including your own) to continuously improve and refactor
• Troubleshoot and solve incidents on the RedHat Linux operating system
• Work actively with other teams to align on the architecture of the PaaS solution
• Keep the base OS image up to date via patches, or make sure patches are available to the virtual machine owners
• Train team members and others with your extensive automation knowledge
• Work together with ServiceNow developers in your team to provide the most intuitive end-user experience possible for the virtual machine OS deployments
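To illustrate the kind of Ansible module work mentioned in this list, here is a minimal sketch of a custom module in Python; the module's purpose (recording a base-image version in a marker file) and its parameter names are hypothetical.

```python
# Minimal sketch of a custom Ansible module in Python, of the kind that might
# accompany the base-OS playbooks described above. The module's behaviour
# (writing an image-version marker file) and its parameters are hypothetical.
from ansible.module_utils.basic import AnsibleModule


def main() -> None:
    module = AnsibleModule(
        argument_spec=dict(
            image_version=dict(type="str", required=True),
            marker_path=dict(type="str", default="/etc/base-image-release"),
        ),
        supports_check_mode=True,
    )

    desired = module.params["image_version"] + "\n"
    path = module.params["marker_path"]

    try:
        with open(path) as f:
            current = f.read()
    except FileNotFoundError:
        current = ""

    changed = current != desired
    if changed and not module.check_mode:
        # Record which base image version this VM was built from.
        with open(path, "w") as f:
            f.write(desired)

    module.exit_json(changed=changed, marker_path=path)


if __name__ == "__main__":
    main()
```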
We are looking for a DevOps engineer/consultant with the following characteristics:
• Master or Bachelor degree
• You are a technical, creative, analytical, and open-minded engineer who is eager to learn and not afraid to take initiative.
• Your favorite t-shirt has “Linux” or “RedHat” printed on it at least once.
• Linux guru: You have great knowledge of Linux servers (RedHat), RedHat Satellite 6, and other RedHat products
• Experience in infrastructure services, e.g., networking, DNS, LDAP, SMTP
• DevOps mindset: You are a team player who is eager to develop and maintain cool products to automate/optimize processes in a complex IT infrastructure, and you are able to build and maintain productive working relationships
• You have great English communication skills, both verbally and in writing.
• No issue working outside business hours to support the platform for critical R&D applications
Other competences we value, but are not strictly mandatory:
• Experience with agile development methods, like Scrum, and conviction of their power to deliver products with immense (business) value.
• “Security” is your middle name, and you are always challenging yourself and your colleagues to design and develop new solutions that are as security-tight as possible.
• Mastery of automation and orchestration with tools like Ansible Tower (or comparable), and comfort with developing new modules in Python or PowerShell.
• It would be awesome if you are already a true Yoda when it comes to code version control and branching strategies with Git, and preferably have worked with GitLab before.
• Experience with automated testing in a CI/CD pipeline with Ansible, Python, and tools like Selenium.
• An enthusiast of cloud platforms like Azure & AWS.
• Background in and affinity with R&D








