11+ PACE Jobs in Hyderabad | PACE Job openings in Hyderabad
Apply to 11+ PACE Jobs in Hyderabad on CutShort.io. Explore the latest PACE Job opportunities across top companies like Google, Amazon & Adobe.
Brief write-up on how you could be a great fit for the role:
What your days and weeks will include.
- Agile scrum meetings, as appropriate, to track and drive individual and team accountability.
- Ad hoc collaboration sessions to define and adopt standards and best practices.
- A weekly ½ hour team huddle to discuss initiatives across the entire team.
- A weekly ½ hour team training session focused on comprehension of our team delivery model.
- Conduct technical troubleshooting, maintenance, and operational support for production code
- Code! Test! Code! Test!
The skills that make you a great fit.
- 5 to 10 years of experience with Eagle Investment Accounting (a BNY Mellon product) as a domain expert/QA tester.
- Expert-level knowledge of capital markets or investment/financial services.
- Must be self-aware, resilient, a strong communicator, and able to lead.
- Strong experience with release testing in an agile model.
- Strong experience preparing test plans, test cases, and test summary reports.
- Strong experience executing smoke and regression test cases.
- Strong experience utilizing and managing large and complex data sets.
- Experience with SQL Server and Oracle desired.
- Knowledge of the DevOps delivery model, including roles and technologies, is desirable.
About Infinity Learn
Infinity Learn, backed by the prestigious Sri Chaitanya Group, is one of India’s fastest-growing EdTech companies. We aim to revolutionize learning through innovative, tech-driven, and result-oriented education solutions for students from Grades 6 to 12+.
Role Overview
We’re hiring Academic Counsellors (Inside Sales) who are passionate about guiding students and parents toward the right learning programs. You’ll play a crucial role in driving revenue growth and helping learners achieve their academic goals.
Key Responsibilities
- Connect with prospective customers via calls, emails, and virtual meetings.
- Understand students’ academic needs and recommend suitable Infinity Learn programs.
- Achieve and exceed monthly sales targets.
- Build and maintain strong relationships with students and parents.
- Handle objections confidently and close sales effectively.
Requirements
- Minimum 1 year of sales experience (EdTech or education background preferred).
- Strong communication and persuasive skills.
- Goal-oriented and self-driven attitude.
- Comfortable working in a fast-paced environment.
- Proficiency in CRM tools will be an added advantage.
Work Schedule
- Days: Tuesday to Sunday (Monday fixed off)
- Timings: 11:00 AM – 9:00 PM
Why Join Us?
- Be part of a dynamic and fast-growing EdTech brand backed by Sri Chaitanya.
- Work with an energetic and motivated sales team.
- Opportunity to make a real impact in transforming education across India.
Key Responsibilities
Reporting & Analytics Development
• Design, develop, and maintain complex reporting solutions serving mission-critical emergency response operations
• Build interactive dashboards and data visualizations that transform large datasets into actionable insights
• Develop advanced analytics features including trend analysis, predictive metrics, and operational intelligence
• Create self-service reporting capabilities enabling stakeholders to access critical data independently
• Implement real-time and scheduled reporting systems with appropriate caching and optimization strategies
SQL & Database Architecture
• Write and optimize complex SQL queries involving multiple joins, subqueries, CTEs, window functions, and aggregations (an illustrative sketch follows this list)
• Review existing database architectures and identify performance bottlenecks and optimization opportunities
• Design and implement database schema enhancements to support evolving reporting requirements
• Develop and maintain stored procedures, functions, and views optimized for reporting workloads
• Perform comprehensive query performance analysis and implement tuning strategies across PostgreSQL and AWS Aurora environments
• Create efficient indexing strategies and data access patterns for large-scale datasets
• Establish data modeling best practices for dimensional and transactional reporting needs
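For illustration only, here is a minimal sketch of the kind of reporting query this role describes, combining a CTE with a window function and run against PostgreSQL via psycopg2. The `incidents` table, its columns, and the connection details are hypothetical, not the employer's actual schema.

```python
# Illustrative only: hypothetical schema, not the employer's actual data model.
import psycopg2

QUERY = """
WITH daily_counts AS (
    SELECT
        region,
        date_trunc('day', reported_at) AS day,
        count(*) AS incident_count
    FROM incidents                     -- hypothetical table
    GROUP BY region, date_trunc('day', reported_at)
)
SELECT
    region,
    day,
    incident_count,
    avg(incident_count) OVER (
        PARTITION BY region
        ORDER BY day
        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
    ) AS rolling_7day_avg              -- window function over the CTE
FROM daily_counts
ORDER BY region, day;
"""

def run_report(dsn: str):
    """Run the rolling-average report and return all rows."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(QUERY)
        return cur.fetchall()
```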
Performance Optimization
• Conduct systematic performance analysis of reporting queries processing millions of records
• Implement query optimization techniques including execution plan analysis, index tuning, and query refactoring
• Design and implement data aggregation strategies, materialized views, and summary tables for improved performance (see the sketch after this list)
• Monitor and optimize database performance metrics including query response times, resource utilization, and concurrency
• Develop ETL processes and data pipelines optimized for reporting and analytics workloads
• Implement caching strategies and data archival policies to maintain optimal system performance
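One common way the summary-table bullet above is realized is a PostgreSQL materialized view refreshed on a schedule. Below is a hedged sketch; the view, source table, and columns are hypothetical and not the real system.

```python
# Illustrative sketch only: hypothetical table/view names, not the real schema.
import psycopg2

CREATE_SUMMARY = """
CREATE MATERIALIZED VIEW IF NOT EXISTS daily_incident_summary AS
SELECT
    region,
    date_trunc('day', reported_at) AS day,
    count(*) AS incident_count
FROM incidents                         -- hypothetical source table
GROUP BY region, date_trunc('day', reported_at);
"""

# A unique index is required for REFRESH ... CONCURRENTLY (non-blocking refresh).
CREATE_INDEX = """
CREATE UNIQUE INDEX IF NOT EXISTS daily_incident_summary_key
    ON daily_incident_summary (region, day);
"""

REFRESH = "REFRESH MATERIALIZED VIEW CONCURRENTLY daily_incident_summary;"

def refresh_summary(dsn: str):
    """Create the summary view if needed, then refresh it (e.g. from a scheduler)."""
    conn = psycopg2.connect(dsn)
    conn.autocommit = True             # run each statement outside an explicit transaction
    with conn.cursor() as cur:
        cur.execute(CREATE_SUMMARY)
        cur.execute(CREATE_INDEX)
        cur.execute(REFRESH)
    conn.close()
```

Dashboards would then query the summary view instead of scanning the raw table on every request.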
Technical Collaboration
• Partner with Product and stakeholder teams to translate business reporting requirements into technical solutions
• Conduct code reviews focused on SQL quality, performance, and best practices
• Provide technical guidance to development teams on reporting architecture and database optimization
• Document database schemas, reporting architectures, and optimization strategies
• Collaborate with DevOps on database infrastructure, monitoring, and scaling initiatives
Required Qualifications
Experience & Background
• 8+ years of software engineering experience with significant focus on reporting, analytics, or business intelligence
• 5+ years of hands-on experience writing complex SQL in production environments
• 3+ years working with large-scale databases (millions+ records) and optimizing query performance
• Proven track record developing enterprise reporting solutions and analytics platforms
• Experience conducting database architecture reviews and implementing performance improvements
Technical Expertise - SQL & Databases
• Expert-level SQL proficiency including complex joins, subqueries, CTEs, window functions, and advanced aggregations
• Deep PostgreSQL expertise with production experience in query optimization and performance tuning
• Strong understanding of database internals including execution plans, indexing strategies, and query optimization
• Experience with AWS Aurora or other cloud-based PostgreSQL solutions
• Proficiency in stored procedures, triggers, functions, and database programming
• Advanced knowledge of database design principles, normalization, and dimensional modeling
• Experience with database performance monitoring and profiling tools
• Strong programming skills in PHP (Yii framework or similar MVC frameworks preferred)
• Solid experience with Vue.js or similar modern JavaScript frameworks for building reporting interfaces
What You'll Work With
• Databases: PostgreSQL, AWS Aurora, multi-tenant architectures
• Backend: PHP (Yii framework), RESTful APIs
• Frontend: Vue.js (Vue 2/3), modern JavaScript, data visualization libraries
• Infrastructure: AWS (RDS, Aurora, S3, Lambda), Postgres
• Tools: Git/GitHub, JIRA, Agile development workflows
Work Location
Hyderabad, India - On-site position with flexible working arrangements to accommodate both India and onsite teams
Greetings!!!
We are hiring for the position of "Starlims Developer" for one of the IT MNCs.
Exp: 5.5 - 12 yrs
Loc: PAN India
Skills: Starlims, Starlims developer, SQL
Job Description:
- Need Starlims developers only.
- Candidate should have experience in SSL (Starlims Scripting Language).
- Candidate should have experience in DB design, optimization, MS SQL, and web services.
Responsibilities
- Collaborate with the core team building a state-of-the-art blockchain-based SaaS.
- Build front-end web applications using the latest technologies.
- Suggest and ideate high- and low-level design and architecture requirements for implementation of features.
- Apply comprehensive knowledge and a thorough understanding of concepts, principles, and technical capabilities to perform varied tasks and projects.
- Design backend RESTful APIs and perform server-side development for web and mobile apps.
- Develop and design database schemas on MongoDB (see the sketch after this list).
- Perform code reviews and unit tests to ensure code consistency and maintainability.
- Set up alerts, monitoring & metrics, and provision and manage resources on AWS Cloud.
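The role itself centers on the Node.js/Express/MongoDB stack; purely as a stack-neutral illustration of the MongoDB schema-design point, here is a minimal Python/pymongo sketch that creates a collection with JSON Schema validation and an index. The collection and field names are hypothetical.

```python
# Stack-neutral sketch in Python/pymongo (the role itself centers on Node.js);
# collection and field names are hypothetical, for illustration only.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")   # or a MongoDB Atlas URI
db = client["agri_demo"]

# Enforce a basic schema at the database level with a JSON Schema validator.
db.create_collection(
    "field_readings",
    validator={
        "$jsonSchema": {
            "bsonType": "object",
            "required": ["farm_id", "recorded_at", "moisture_pct"],
            "properties": {
                "farm_id": {"bsonType": "string"},
                "recorded_at": {"bsonType": "date"},
                "moisture_pct": {"bsonType": "double", "minimum": 0, "maximum": 100},
            },
        }
    },
)

# Index the fields the API would query on most often.
db["field_readings"].create_index([("farm_id", ASCENDING), ("recorded_at", ASCENDING)])
```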
Minimum Qualifications
- Passion to solve complex problems of agriculture in a dynamic start-up environment.
- Flexible, curious, exploratory, and can-do attitude.
- Great sense of ownership and ability to work with very limited supervision.
- Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
- 2-5 years of hands-on coding expertise in software engineering.
- Experience with JavaScript, HTML, CSS.
- Ability to speak English fluently.
Preferred Qualifications
- Strong programming experience of 2+ years with the Node.js, Express.js, MongoDB, and Angular stack.
- Strong problem-solving/troubleshooting skills.
- Experience with software design and architecture for building highly efficient and scalable systems.
- Experience with MongoDB Atlas for database hosting and storage.
- Experience with the SDLC in an Agile work environment using tools like Jira and GitHub.
- Experience with AWS services like ECS, EKS, EC2, S3, Lambda, Load Balancing, Route 53, and API Gateway, along with infrastructure provisioning and optimization.
- Knowledge and experience of NoSQL databases is a must.
- Experience with DevOps and CI/CD pipelines on Jenkins, GitHub Actions, etc.
- Experience with containerization environments like Docker & Kubernetes.
- Experience with Material Design is an advantage.
- Experience with TDD is an advantage.
- Experience with or familiarity with blockchain protocols like Hyperledger Fabric is a big advantage.
We are looking for an experienced Sr. DevOps Consultant Engineer to join our team. The ideal candidate should have at least 5 years of experience.
We are retained by a promising startup located in Silicon Valley, backed by a Fortune 50 firm, with veterans from firms such as Zscaler, Salesforce & Oracle. The founding team has been part of three unicorns and two successful IPOs and is well funded by Dell Technologies and Westwave Capital. The company is widely recognized as an industry innovator in the data privacy and security space and is being built by proven cybersecurity executives who have successfully built and scaled high-growth security companies and built privacy programs as executives.
Responsibilities:
- Develop and maintain infrastructure as code using tools like Terraform, CloudFormation, and Ansible
- Manage and maintain Kubernetes clusters on EKS and EC2 instances
- Implement and maintain automated CI/CD pipelines for microservices
- Optimize AWS costs by identifying cost-saving opportunities and implementing cost-effective solutions (see the sketch after this list)
- Implement best security practices for microservices, including vulnerability assessments, SOC2 compliance, and network security
- Monitor the performance and availability of our cloud infrastructure using observability tools such as Prometheus, Grafana, and Elasticsearch
- Implement backup and disaster recovery solutions for our microservices and databases
- Stay up to date with the latest AWS services and technologies and provide recommendations for improving our cloud infrastructure
- Collaborate with cross-functional teams, including developers, and product managers, to ensure the smooth operation of our cloud infrastructure
- Experience with large scale system design and scaling services is highly desirable
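The team's actual tooling is the Terraform/CloudFormation/Ansible stack listed above; purely as an illustration of the AWS cost-optimization bullet, here is a small Python/boto3 sketch that flags unattached EBS volumes as candidate savings. The region and filter values are assumptions, not the company's setup.

```python
# Minimal sketch, not the team's actual tooling: uses boto3 to flag unattached
# EBS volumes as candidate cost savings. Region and filters are assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_volumes")
unattached = []
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    unattached.extend(page["Volumes"])

for vol in unattached:
    # "available" volumes are not attached to any instance but still billed.
    print(f"{vol['VolumeId']}: {vol['Size']} GiB, created {vol['CreateTime']:%Y-%m-%d}")
```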
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- At least 5 years of experience in AWS DevOps and infrastructure engineering
- Expertise in Kubernetes management, Docker, EKS, EC2, Queues, Python Threads, Celery Optimization, Load balancers, AWS cost optimizations, Elasticsearch, Container management, and observability best practices
- Experience with SOC2 compliance and vulnerability assessment best practices for microservices
- Familiarity with AWS services such as S3, RDS, Lambda, and CloudFront
- Strong scripting skills in languages like Python, Bash, and Go
- Excellent communication skills and the ability to work in a collaborative team environment
- Experience with agile development methodologies and DevOps practices
- AWS certification (e.g. AWS Certified DevOps Engineer, AWS Certified Solutions Architect) is a plus.
Notice period: Can join within a month
Full-time position
Work Location: Hyderabad
Experience level: 3 to 5 years
Mandatory Skills: Python, Django/Flask, and REST APIs
Package: Up to 20 LPA
Job Description:
- Experience in web application development using Python and Django/Flask.
- Proficient in developing REST APIs and integrations, and familiar with JSON-formatted data (a minimal sketch follows this list).
- Good to have knowledge of front-end frameworks like Vue.js/Angular/React.js.
- Writing high-quality code with best practices based on technical requirements.
- Hands-on experience in analysis, design, coding, and implementation of complex, custom-built software products.
- Should have experience with databases, preferably Redis.
- Experience working with Git or an equivalent code management / version control system with best practices.
- Good to have knowledge of Elasticsearch, AWS, and Docker.
- Should have an interest in exploring and working in the cybersecurity domain.
- Experience with Agile development methods.
- Should have strong analytical and logical skills.
- Should be good at fundamentals: data structures, algorithms, programming languages, distributed systems, and information retrieval.
- Should have good communication skills and client-facing experience.
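As a rough idea of the Python/Flask REST work described above, here is a minimal, hedged sketch of a JSON API. The "tasks" resource, its routes, and the in-memory store are hypothetical and stand in for a real domain model backed by a database.

```python
# Minimal, illustrative Flask REST sketch; the "tasks" resource and its fields
# are hypothetical and only stand in for a real domain object.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store purely for illustration (a real service would use a database).
TASKS = {1: {"id": 1, "title": "triage alerts", "done": False}}

@app.route("/api/tasks", methods=["GET"])
def list_tasks():
    """Return all tasks as a JSON array."""
    return jsonify(list(TASKS.values()))

@app.route("/api/tasks", methods=["POST"])
def create_task():
    """Create a task from a JSON body such as {"title": "..."}."""
    payload = request.get_json(force=True)
    task_id = max(TASKS, default=0) + 1
    task = {"id": task_id, "title": payload.get("title", ""), "done": False}
    TASKS[task_id] = task
    return jsonify(task), 201

if __name__ == "__main__":
    app.run(debug=True)
```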
• Mandatory Modules: Inventory, Purchase Order, Order Management.
• Skill: Oracle Supply Chain Management Functional Consultant
• Hands-on experience in Oracle SCM 11i & R12 with good English (written and verbal) communication skills; a team player, ready to travel/relocate, and available at short notice.
• Should have extensively worked on Oracle EBS support/implementation/upgrade projects and should be aware of solution design, documentation, and the AIM/OUM methodology.
• Expertise in Modules OM, Purchasing, Inventory.
• Industry experience in Supply Chain Industry will be an added advantage.
• Ready to work in Shifts by rotation if required.
*Skill Required: Oracle EBS SCM Functional*
o Strong Python development skills, with 7+ years of experience with SQL.
o A bachelor's or master's degree in Computer Science or related areas.
o 8+ years of experience in data integration and pipeline development.
o Experience in implementing Databricks Delta Lake and data lakes.
o Expertise designing and implementing data pipelines using modern data engineering approaches and tools: SQL, Python, Delta Lake, Databricks, Snowflake, Spark (a minimal sketch follows this list).
o Experience working with multiple file formats (Parquet, Avro, Delta Lake) & APIs.
o Experience with AWS Cloud for data integration with S3.
o Hands-on development experience with Python and/or Scala.
o Experience with SQL and NoSQL databases.
o Experience using data modeling techniques and tools (focused on dimensional design).
o Experience with micro-service architecture using Docker and Kubernetes.
o Experience working with one or more of the public cloud providers, i.e. AWS, Azure, or GCP.
o Experience in effectively presenting and summarizing complex data to diverse audiences through visualizations and other means.
o Excellent verbal and written communication skills and strong leadership capabilities.
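As a hedged illustration of the pipeline skills listed above (Python, Spark, Parquet, Delta Lake), here is a minimal PySpark sketch that reads raw Parquet from S3 and writes a curated Delta table. It assumes a Spark environment with the delta-spark package configured; the bucket, paths, and columns are hypothetical.

```python
# Minimal sketch (assumes a Spark session with the Delta Lake package available,
# e.g. delta-spark, and hypothetical S3 paths/columns).
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("orders-delta-sketch")
         .getOrCreate())

# Read raw Parquet files landed in S3 (hypothetical bucket/prefix).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Light transformation: daily order totals per customer.
daily = (orders
         .withColumn("order_date", F.to_date("order_ts"))
         .groupBy("customer_id", "order_date")
         .agg(F.sum("amount").alias("total_amount")))

# Write as a Delta table, partitioned by date, for downstream analytics.
(daily.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("s3://example-bucket/curated/daily_orders/"))
```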
Skills:
Python
Job Description:
- Good working knowledge of .NET, ASP.NET, C#, MVC, AngularJS, web technologies (ASP.NET, jQuery, JavaScript), LINQ, HTML, and CSS; .NET Core is good to have but not mandatory.
- Strong MS SQL skills and the ability to write stored procedures, SQL functions, and queries.
- Keen to learn new technologies like Artificial Intelligence and Machine Learning (enthusiasm to learn is a plus).
- Working exposure to TestLink, JIRA, Git, Jenkins, Visual Studio 2015 or later, OOP concepts, and cloud concepts.
- Availability and flexibility to work collaboratively in a team.
- Strong communication skills.
- Proven track record of working independently and in a team.
- Experience with .NET technologies: VB.NET, ASP.NET, C#, ADO.NET, MS SQL, MVC, Web Services, WCF, Entity Framework, WPF, WF, and other .NET frameworks, as well as SQL Server.
Roles & Responsibilities:
- Expected to spend 80% of the time on hands-on development and design, and the remaining 20% on providing technology guidance and resolving other impediments
- Translate detailed business requirements into optimal databases and solutions.
- Collaborate well with other developers, leads and data implementation specialists to design and create advanced, elegant and efficient products
- Significant knowledge of PL/SQL including tuning, triggers, and stored procedures
- Prepare and package scripts and code across development, test and QA environments
- Participate in change control planning for production deployments (assisting leads)
- Participate in requirements gathering and analysis on projects
- Address any requirements changes, improvements, or project deliverable issues
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low latency access to distributed storage, auto scaling, and self healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S./equivalent in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team




