11+ Hotel Management Jobs in Pune | Hotel Management Job openings in Pune
Apply to 11+ Hotel Management Jobs in Pune on CutShort.io. Explore the latest Hotel Management Job opportunities across top companies like Google, Amazon & Adobe.

Job Description
We are seeking Senior Software Engineers who can architect and ship fullstack digital products at a high bar — using AI-assisted development tools to move faster without cutting corners. This role spans platform, product, and go-to-market — you'll own backend systems, shape frontend experiences, make infrastructure decisions, and set a higher engineering standard for the team around you. The ideal candidate has designed systems they can defend, shipped products at scale, and knows what it takes to get there.
Requirements
Product & Client Ownership: Be the day-to-day technical owner on engagements — understand the client's business deeply, shape the product roadmap, and translate ambiguous problems into clear engineering direction. Show up to demos and reviews with the confidence to defend tradeoffs and flag risks early.
Architecture & Judgment: Make architectural decisions that hold up at scale. AI can generate code — your job is to decide what gets built, how it fits together, and when to push back. Evaluate tradeoffs, review TRDs, and set the technical direction the rest of the team executes against.
Fullstack Execution: Ship backend services, APIs, database schemas, and user-facing features end-to-end. Use AI-assisted tools (Cursor, Claude Code, Antigravity) to move at the speed of a small team without cutting corners on quality.
Platform & Reliability: Own cloud infrastructure, CI/CD, and production systems. Define how the team monitors, debugs, and responds to incidents. If something breaks at 2am, you've already thought about it.
AI & Automation: Drive AI adoption in products — LLM APIs, RAG pipelines, agentic workflows. Push for automation across client and internal workflows. Know what these tools are good at and, more importantly, where they fail.
Raising the Bar: Be the judgment layer for junior engineers who are moving fast with AI tools. Review code for what matters — not style, but correctness, scalability, and whether the author actually understood what they shipped. Run knowledge-sharing sessions. Onboard people well.
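To make the RAG requirement above concrete, here is a minimal sketch of the retrieval step of a RAG pipeline. Toy 3-dimensional vectors and cosine similarity stand in for a real embedding model; all document names and numbers are purely illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    """Rank documents by similarity to the query embedding.
    In a real RAG pipeline the vectors come from an embedding model
    and the top-k chunks are placed into the LLM prompt as context."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy "embeddings" -- illustrative only.
corpus = [
    {"text": "refund policy",  "vec": [0.9, 0.1, 0.0]},
    {"text": "shipping times", "vec": [0.1, 0.9, 0.0]},
    {"text": "returns process", "vec": [0.8, 0.2, 0.1]},
]
top = retrieve([1.0, 0.0, 0.0], corpus, k=2)
```

The rest of the pipeline (chunking, embedding, prompt assembly) wraps around this ranking step; knowing where it fails (poor chunking, out-of-domain queries) is as important as building it.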
Must Haves
3–5 years of professional engineering experience with production systems you've owned end-to-end.
Active user of AI IDEs (Cursor, Claude Code, Antigravity, or similar).
Demonstrated system design ability — you've made architectural decisions and can evaluate trade-offs.
Good exposure to cloud platforms and deployments.
Familiarity with observability and monitoring tools — you can track down issues and identify bottlenecks.
Deep backend proficiency: API design, databases, microservices, distributed systems, event-driven architecture, and message brokers.
Worked with at least two of REST, GraphQL, or gRPC in production.
Eye for design — you care about the experiences you build for users.
High rate of learning — you figure things out fast.
Nice to Have
Cloud architecture experience (AWS, GCP, Azure) with containerisation and orchestration.
Familiarity with AI/ML: prompt engineering, embeddings, agent frameworks (LangChain, CrewAI, LangGraph).
Experience with automation and workflow tools (n8n, Make, Zapier).
Benefits
Mentorship: Work next to some of the best engineers and designers — and be one for others.
Freedom: An environment where you get to practice your craft. No micromanagement.
Comprehensive healthcare: Healthcare for you and your family.
Growth: A tailor-made program to help you achieve your career goals.
A voice that is heard: We don't claim to know the best way of doing things. We like to listen to ideas from our team.
Job Title: Site Reliability Engineer (SRE) / Application Support Engineer
Experience: 3–7 Years
Location: Bangalore / Mumbai / Pune
About the Role
The successful candidate will join the S&C Site Reliability Engineering (SRE) Team, responsible for providing Tier 2/3 support to S&C business applications and environments. This role requires close collaboration with client-facing teams (Client Services, Product, and Research) as well as Infrastructure, Technology, and Application Development teams to maintain and support production and non-production environments.
Key Responsibilities
- Provide Tier 2/3 product technical support and issue resolution.
- Develop and maintain software tools to improve operations and support efficiency.
- Manage system and software configurations; troubleshoot environment-related issues.
- Identify opportunities to optimize system performance through configuration improvements or development suggestions.
- Plan, document, and deploy software applications across Unix/Linux, Azure, and GCP environments.
- Collaborate with Development and QA teams throughout the software release lifecycle.
- Analyze and improve release and deployment processes to drive automation and efficiency.
- Coordinate with infrastructure teams for maintenance, planned downtimes, and resource management across production and non-production environments.
- Participate in on-call support (minimum one week per month) for off-hour emergencies and maintenance activities.
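As an illustration of the support tooling this role develops (and the Python scripting the qualifications call for), here is a minimal sketch that scans application logs for errors per component. The log format is hypothetical; adjust the parsing to the real logs in your environment.

```python
from collections import Counter

def summarize_errors(log_lines):
    """Count ERROR entries per component from lines like:
    '2024-01-15 02:13:07 ERROR order-service Timeout calling upstream'
    (hypothetical format -- a real tool would match your logging layout)."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 4 and parts[2] == "ERROR":
            counts[parts[3]] += 1
    return counts

logs = [
    "2024-01-15 02:13:07 ERROR order-service Timeout calling upstream",
    "2024-01-15 02:13:09 INFO order-service Retry succeeded",
    "2024-01-15 02:14:01 ERROR payment-service Connection refused",
    "2024-01-15 02:14:02 ERROR order-service Timeout calling upstream",
]
summary = summarize_errors(logs)
```

A tool like this, scheduled or wired into an alerting channel, is a typical example of improving Tier 2/3 support efficiency through automation.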
Required Skills & Qualifications
- Education:
- Bachelor’s degree in Computer Science, Engineering, or a related field (BE/MCA).
- Master’s degree is a plus.
- Experience:
- 3–7 years in Production Support, Application Management, or Application Development (support/maintenance).
- Technical Skills:
- Strong Unix/Linux administration skills.
- Excellent scripting skills — Shell, Python, Batch (mandatory).
- Database expertise — Oracle (must have).
- Understanding of Software Development Life Cycle (SDLC).
- PowerShell knowledge is a plus.
- Experience in Java or Ruby development is desirable.
- Exposure to cloud platforms (GCP, Azure, or AWS) is an added advantage.
- Soft Skills:
- Excellent problem-solving and troubleshooting abilities.
- Strong collaboration and communication skills.
- Ability to work in a fast-paced, cross-functional environment.
Preferred Education & Experience:
- Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
- Well-versed in and 5+ years of hands-on demonstrable experience with:
  ▪ Object-Oriented Modeling, Design, & Programming
  ▪ Microservices Architecture, API Design, & Implementation
  ▪ Relational, Document, & Graph Data Modeling, Design, & Implementation
- Well-versed in and hands-on demonstrable experience with:
  ▪ Stream & Batch Big Data Pipeline Processing
  ▪ Distributed Cloud Native Computing
  ▪ Serverless Computing & Cloud Functions
- 5+ years of hands-on development experience in Java programming.
- 3+ years of hands-on development experience in one or more libraries & frameworks such as Spring Boot, Apache Camel, Akka, etc.; extra points if you can demonstrate your knowledge with working examples.
- 2+ years of hands-on development experience in one or more relational and NoSQL datastores such as Amazon S3, Amazon DocumentDB, Amazon Elasticsearch Service, Amazon Aurora, Amazon DynamoDB, Amazon Athena, etc.
- 2+ years of hands-on development experience in one or more technologies such as Amazon Simple Queue Service, Amazon Kinesis, Apache Kafka, AWS Lambda, AWS Batch, AWS Glue, AWS Step Functions, Amazon API Gateway, etc.
- 2+ years of hands-on development experience in one or more technologies such as AWS Developer Tools; AWS Management & Governance; AWS Networking and Content Delivery; AWS Security, Identity, and Compliance; etc.
- Well-versed in virtualization & containerization; must demonstrate experience with technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
- Demonstrable working experience with API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption.
- Hands-on working experience with DevOps tools and platforms such as Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, etc.
- Well-versed in storage, network, and storage-networking basics, enabling you to work in a cloud environment.
Experience: 5+ years
Job Location: Remote/Pune
Preferred Education & Experience:
- Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience.
- Well-versed in and 5+ years of hands-on demonstrable experience with:
  ▪ Data Analysis & Data Modeling
  ▪ Database Design & Implementation
  ▪ Database Performance Tuning & Optimization
  ▪ PL/pgSQL & SQL
- 5+ years of hands-on development experience in a relational database (PostgreSQL/SQL Server/Oracle).
- 5+ years of hands-on development experience in SQL and PL/pgSQL, including stored procedures, functions, triggers, and views.
- Hands-on, demonstrable working experience in database design principles, SQL query optimization techniques, index management, integrity checks, statistics, and isolation levels.
- Hands-on, demonstrable working experience in database read & write performance tuning & optimization.
- Knowledge of and experience with Domain-Driven Design (DDD), object-oriented programming (OOP), cloud architecture, and NoSQL database concepts are added values.
- Knowledge and working experience in the Oil & Gas, Financial, and Automotive domains is a plus.
- Hands-on development experience in one or more NoSQL datastores such as Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc. is a plus.
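The index-management and query-optimization items above can be illustrated with a short sketch. SQLite stands in for PostgreSQL here purely for a self-contained demonstration (the syntax and planner output differ, but the principle is the same: an index turns a full table scan into an index search).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [(f"cust{i % 100}", i * 1.5) for i in range(1000)],
)

# Without an index, filtering on `customer` scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust7'").fetchall()

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# With the index, the plan reports a search using idx_orders_customer.
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust7'").fetchall()
```

In PostgreSQL the equivalent check is `EXPLAIN ANALYZE`, and the same reasoning extends to statistics, integrity checks, and isolation levels mentioned above.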
Job Location: Pune/Remote
Work Timings: 2:30 pm - 11:30 pm
Joining Period: Immediate to 20 days
- Network automation tools: Ansible, Terraform
- SD-WAN and cloud networking (AWS, Azure, GCP)
- Designing network security solutions: firewalls, VPN, IPS/IDS
Job Description:
· Mechanical Engineer with 7+ years of relevant experience.
· Strong knowledge of injection molding simulation (Cool + Fill + Pack + Warp) using Autodesk Moldflow.
· Strong knowledge of gate-location selection, 2K, and insert molding analysis.
· Working knowledge of plastic injection mold design.
· Strong knowledge of plastic material selection; able to suggest alternate materials.
· Good knowledge of the injection molding process.
· Experience solving injection molding issues (weld lines, sink marks, warpage, etc.).
· Able to determine optimized process parameters for the injection molding process.
· Strong analytical and results-interpretation skills.
· Working understanding of CAD software such as SolidWorks, CATIA, etc.
· Working experience in product design.
· Good exposure to manufacturing processes.
· Good knowledge of DFMEA, DFM, and DFA.
Job Title: Sr. Dot Net Full Stack Developer
Work Mode: Work from Office (Mon-Fri)
Experience: 4 to 10 years
Skills / Technical Experience Required
Must have competencies: .Net Core / .Net Framework, Angular / React, C#, SQL, REST API, Entity Framework Core
Additional competencies: HTML5, CSS3, Azure, Test-Driven Development, NoSQL DBs
Good to have competencies: TFS, GitHub, Agile methodologies, Figma
Keys Roles / Responsibilities / Abilities
• Understanding the requirements for developing large modules and standalone applications in the project and adhering to the delivery schedule.
• The ability to identify improvements to the existing application code and designs, increasing flexibility and reducing future effort, and the ability to “pitch” these ideas to team leaders and project sponsors as required.
• Work closely with the product owner or project sponsors to discuss the requirements, estimate development efforts and gain their acceptance of the solution. This will include working directly with the client where required.
• Mentoring of Developers and non-chargeable juniors in the team in achieving technical excellence in the delivery of the project.
• The ability to be involved in build and deployment activities and address build issues.
• Clearly explain and discuss technical points with both technical and non-technical staff.
Your benefits on joining Xperate
• Salary - Higher salary than industry standard.
• Annual Leave – 20 days excluding 10 public holidays. Medical/Sick leave is also provided.
• Life Insurance – 5L of default cover for you and your family.
• Accidental Cover – 20L of cover for each employee.
• Bonus Scheme – 100% based on company & individual performance.
• Latest Technology - Exposure to the latest technologies.
• Employee Development - Committed to the development, growth & well-being of our people
Regards,
Dipshikha Kulshrestha
- Design, create, test, and maintain data pipeline architecture in collaboration with the Data Architect.
- Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using Java, SQL, and Big Data technologies.
- Support the translation of data needs into technical system requirements. Support in building complex queries required by the product teams.
- Build data pipelines that clean, transform, and aggregate data from disparate sources
- Develop, maintain and optimize ETLs to increase data accuracy, data stability, data availability, and pipeline performance.
- Engage with Product Management and Business to deploy and monitor products/services on cloud platforms.
- Stay up-to-date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with the increased data sets of consumer experience.
- Handle data integration, consolidation, and reconciliation activities for digital consumer / medical products.
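The clean/transform/aggregate responsibility above can be sketched in miniature. This is an illustrative pure-Python example (in practice the same steps run in Java, SQL, or a Big Data framework as the posting describes); the field names and records are hypothetical.

```python
from collections import defaultdict

def clean(records):
    """Drop rows with missing fields and normalize casing, whitespace, and types."""
    for r in records:
        if r.get("product") and r.get("amount") is not None:
            yield {"product": r["product"].strip().lower(),
                   "amount": float(r["amount"])}

def aggregate(records):
    """Sum amounts per product -- the 'aggregate' step of the pipeline."""
    totals = defaultdict(float)
    for r in records:
        totals[r["product"]] += r["amount"]
    return dict(totals)

# Disparate-source rows: inconsistent casing, string vs numeric amounts, a bad row.
raw = [
    {"product": " Widget ", "amount": "10.5"},
    {"product": "widget", "amount": 4.5},
    {"product": None, "amount": 3.0},      # dropped by clean()
    {"product": "gadget", "amount": "2.0"},
]
totals = aggregate(clean(raw))
```

Production ETLs add the concerns the posting lists on top of this skeleton: scheduling, monitoring, reconciliation, and performance tuning.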
Job Qualifications:
- Bachelor’s or master's degree in Computer Science, Information management, Statistics or related field
- 5+ years of experience in the Consumer or Healthcare industry in an analytical role, with a focus on building data pipelines, querying data, analyzing, and clearly presenting analyses to members of the data science team.
- Technical expertise with data models and data mining.
- Hands-on knowledge of programming languages such as Java, Python, R, and Scala.
- Strong knowledge of Big Data tools such as Snowflake, AWS Redshift, Hadoop, MapReduce, etc.
- Knowledge of tools such as AWS Glue, S3, AWS EMR, streaming data pipelines, and Kafka/Kinesis is desirable.
- Hands-on knowledge of SQL and NoSQL database design.
- Knowledge of CI/CD for building and hosting solutions.
- AWS certification is an added advantage.
- Strong knowledge of visualization tools such as Tableau and QlikView is an added advantage.
- A team player capable of working and integrating across cross-functional teams to implement project requirements. Experience in technical requirements gathering and documentation.
- Ability to work effectively and independently in a fast-paced agile environment with tight deadlines.
- A flexible, pragmatic, and collaborative team player with the innate ability to engage with data architects, analysts, and scientists.
2. Extensive expertise in the below areas of AWS development.
3. Amazon DynamoDB, Amazon RDS, Amazon APIs, AWS Elastic Beanstalk, and AWS CloudFormation.
4. Lambda, Kinesis, CodeCommit, CodePipeline.
5. Leveraging AWS SDKs to interact with AWS services from the application.
6. Writing code that optimizes performance of the AWS services used by the application.
7. Developing with RESTful API interfaces.
8. Code-level application security (IAM roles, credentials, encryption, etc.).
9. Programming language: Python or .NET. Programming with AWS APIs.
10. General troubleshooting and debugging.
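The RESTful API item in the list above can be sketched with the Python standard library alone. This is a minimal illustrative handler, not a production service; in the AWS setting the posting describes, the same resource would more typically sit behind Amazon API Gateway with a Lambda handler.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    """Serves a single hypothetical /status resource as JSON."""

    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"service": "demo", "healthy": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StatusHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Exercise the endpoint like a client would.
url = f"http://127.0.0.1:{server.server_port}/status"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
server.shutdown()
```

The essentials carry over regardless of framework: resource-oriented paths, correct status codes, and a JSON content type.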
- GitHub
- Selenium
- Mobile testing
- Website testing
Send your resume to support@itgyani.com if you are interested in applying for this position.


