11+ Hyperion Jobs in Chennai | Hyperion Job openings in Chennai
Job Description:
Proven experience with Hyperion Planning 11.x or EPM Cloud
Extensive experience in developing and maintaining Hyperion Planning and Essbase applications
Strong understanding of ASO and BSO cube development
Independently handle metadata build and security
Expert level knowledge in writing business rules
Ability to write basic SQL queries
Experience with SmartView, web forms, financial reports, MDX queries, and MaxL scripting
Oracle relational database experience and FDMEE experience are a plus
Deep functional and technical knowledge of financial systems and business processes, especially around planning, budgeting, forecasting
Apply structured knowledge to solve problems, break down issues, and identify solutions
Strong oral and written communication skills are essential for this role
The ability to work independently and be proactive
Knowledge of integration between external systems
Analytical and assessment skills are essential
Proven experience in providing system support and direct contact with users to solve issues with business applications.
Candidate profile:
- The resource should have a Hyperion Planning background.
- They should be able to understand the existing Planning business rules with the help of the Sutherland team.
- They should be able to independently develop and modify business rules; the Sutherland team will provide the required design for change requests.
- The resource is responsible for development, testing, production deployment, and post-deployment support.
Senior Software Engineer – SQL Server / T-SQL
Chennai | IIT Madras Research Park | Full-Time
About Novacis Digital
Novacis Digital is a product-first technology company building AI-driven platforms and large-scale data systems. Our products process complex, high-volume data to power real-time analytics and GenAI-driven experiences.
We don’t see SQL as “just a database layer” - we treat it as a core compute engine. If you love writing efficient SQL and solving performance problems, this is the role for you.
What You Will Do
· Design and build complex T-SQL stored procedures involving Dynamic SQL, along with views, functions, and triggers
· Implement flexible, metadata-driven query frameworks using sp_executesql and parameterized Dynamic SQL
· Engineer high-performance, set-based queries using CTEs, window functions, temp tables and table variables
· Optimize queries using execution plans, statistics and DMVs
· Refactor inefficient queries and redesign schemas for performance and scalability
· Solve real-world challenges related to locks, blocking, deadlocks and transaction isolation
· Collaborate with application engineers to build reliable, high-performance data access layers
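The Dynamic SQL work described above comes down to one discipline: interpolate only vetted identifiers into the SQL text, and bind every value as a parameter (the idea behind `sp_executesql`). A minimal sketch of that pattern, using Python's built-in `sqlite3` in place of SQL Server so it runs anywhere; the `orders` table and column whitelist are hypothetical:

```python
import sqlite3

# Hypothetical schema used only for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "South", 100.0), (2, "North", 250.0), (3, "South", 75.0)])

# Whitelist of filterable columns: never interpolate user input into SQL text.
ALLOWED_COLUMNS = {"region", "amount"}

def filtered_total(filters: dict) -> float:
    """Build the WHERE clause dynamically from whitelisted column names,
    but pass the *values* as bound parameters (the sp_executesql idea)."""
    clauses, params = [], []
    for col, val in filters.items():
        if col not in ALLOWED_COLUMNS:
            raise ValueError(f"column {col!r} not allowed")
        clauses.append(f"{col} = ?")  # only the vetted column name is interpolated
        params.append(val)
    sql = "SELECT COALESCE(SUM(amount), 0) FROM orders"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchone()[0]

print(filtered_total({"region": "South"}))  # 175.0
```

In T-SQL the same separation holds: the column list is validated against metadata, and the values travel in the `@params` argument of `sp_executesql`, which also lets the engine reuse the query plan.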
What We’re Looking For
We’re looking for true SQL engineers — people who think in execution flow, logic and data behavior rather than just syntax.
You should have:
· 4+ years of deep hands-on experience with Microsoft SQL Server & T-SQL
· Strong expertise in:
o Stored Procedures (with Dynamic SQL)
o Views
o Functions
o Triggers
· Strong experience with:
o Dynamic SQL best practices and secure execution patterns
o Indexing strategies and query plan optimization
o Handling parameter sniffing and plan instability
· Strong knowledge of:
o Temp tables vs table variables
o Cardinality estimation
o Cost-based optimization concepts
Nice to Have
· Exposure to GenAI data pipelines or analytical architectures
· Exposure to graph, vector, and NoSQL databases
How We Work
· We write production-grade T-SQL
· We value performance, clarity, and correctness
· We invest heavily in query readability and maintainability
· Engineering quality is non-negotiable
Apply Now
If you enjoy designing complex Dynamic SQL-powered stored procedures and tuning systems at scale, we’d like to talk.
What You’ll Do
• Build and scale backend services using Java & Spring Boot
• Work on API integrations (REST, SOAP), caching & rate limiting
• Contribute across the full SDLC – design, development, testing & deployment
• Solve problems around performance, scalability & reliability
What We’re Looking For
• Strong knowledge of Data Structures & Algorithms
• Experience with Java, Spring Boot, REST/SOAP
• Hands-on with system & solution design
• Database experience: MongoDB / PostgreSQL / MySQL / Oracle
• Good debugging skills & unit testing
• Familiarity with Git and AI coding assistants (Copilot, Claude, etc.)
We are looking for a passionate and experienced Java Developer with over 4 years of hands-on experience in building robust and scalable backend systems using Java, Spring Boot, and Microservices architecture.
Key Responsibilities:
- Design, develop, test, and deploy high-performance, scalable, and secure backend services using Java 8+, Spring Boot, and Microservices.
- Participate in the entire software development lifecycle (SDLC), from requirements gathering to production deployment and support.
- Collaborate with frontend developers, DevOps engineers, and product owners to deliver end-to-end features.
- Optimize application performance and ensure high availability and responsiveness.
- Write clean, maintainable, and reusable code while following best practices (coding standards, unit testing, CI/CD, etc.).
- Develop RESTful APIs and ensure their quality and consistency.
- Handle integration with external systems and third-party services.
- Participate in code reviews, provide constructive feedback, and mentor junior developers.
Technical Skills Required:
- Languages: Java 8 or above
- Frameworks: Spring Boot, Spring MVC, Spring Security
- Architecture: Microservices architecture and design patterns
- Databases: MySQL/PostgreSQL, MongoDB (optional)
- Messaging: Kafka, RabbitMQ (optional)
- API Development: RESTful APIs, Swagger/OpenAPI
- Version Control: Git, GitHub/GitLab
- Tools: Maven/Gradle, Jenkins, Docker
- Testing: JUnit, Mockito
- Cloud (optional): AWS/Azure/GCP exposure
- CI/CD & DevOps tools (optional): Jenkins, Docker, Kubernetes
Nice to Have:
- Experience in containerization and orchestration (Docker, Kubernetes).
- Knowledge of API Gateway, Service Registry (e.g., Eureka), and Circuit Breakers (e.g., Hystrix or Resilience4j).
- Experience with Agile methodologies (Scrum/Kanban).
- Familiarity with monitoring and logging tools (ELK, Prometheus, Grafana, etc.).
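The circuit breakers mentioned above (Hystrix, Resilience4j) all implement the same small state machine: closed, open after repeated failures, half-open after a cooldown. A language-agnostic sketch of that pattern in Python, not tied to any of those libraries; the thresholds and names are illustrative:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive failures the
    circuit opens and calls fail fast until `reset_timeout` seconds pass,
    after which one trial call is allowed (half-open)."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Production libraries add failure-rate windows, metrics, and per-endpoint configuration on top of this core, but the open/half-open/closed transitions are the essence of the pattern.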
Job Overview:
We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
- Integrate data from diverse sources and ensure its quality, consistency, and reliability.
- Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
- Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
- Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Automate data validation, transformation, and loading processes to support real-time and batch data processing.
- Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
Required Skills:
- 5 to 7 years of hands-on experience in data engineering roles.
- Strong proficiency in Python and PySpark for data transformation and scripting.
- Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
- Solid understanding of SQL and database optimization techniques.
- Experience working with large-scale data pipelines and high-volume data environments.
- Good knowledge of data modeling, warehousing, and performance tuning.
Preferred/Good to Have:
- Experience with workflow orchestration tools like Airflow or Step Functions.
- Familiarity with CI/CD for data pipelines.
- Knowledge of data governance and security best practices on AWS.
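Automated data validation, as in the responsibilities above, usually means splitting incoming records into loadable rows and quarantined rejects before anything reaches the warehouse. A plain-Python sketch of that step; in this role the equivalent logic would live inside a Glue/PySpark job, and the field names and rules here are hypothetical:

```python
# Required fields for a hypothetical orders feed.
REQUIRED = {"order_id", "amount"}

def validate(rows):
    """Split incoming records into clean rows and rejects with a reason,
    so bad data can be quarantined instead of loaded."""
    clean, rejects = [], []
    for row in rows:
        missing = REQUIRED - row.keys()
        if missing:
            rejects.append((row, f"missing fields: {sorted(missing)}"))
        elif not isinstance(row["amount"], (int, float)) or row["amount"] < 0:
            rejects.append((row, "amount must be a non-negative number"))
        else:
            clean.append(row)
    return clean, rejects

good, bad = validate([
    {"order_id": 1, "amount": 10.5},
    {"order_id": 2},                 # missing amount -> rejected
    {"order_id": 3, "amount": -4},   # negative amount -> rejected
])
print(len(good), len(bad))  # 1 2
```

Keeping the reject reason alongside the row makes the quarantine auditable, which is what turns a validation step into a monitorable part of the pipeline.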
Job Requirement
1. 3+ years of mobile development experience building and designing advanced applications for the Android platform
2. Translate designs and wireframes into high-quality code
3. Knowledge of MVC/MVVM architecture
4. Strong knowledge of the Android SDK, different Android versions, and how to handle different screen sizes
5. Familiarity with RESTful APIs for connecting to backend servers
6. Strong fundamentals in Core Java, and API and DB integrations on Android
We are looking for an outstanding Big Data Engineer with experience setting up and maintaining data warehouses and data lakes for an organization. This role will closely collaborate with the Data Science team and help them build and deploy machine learning and deep learning models on big data analytics platforms.
Roles and Responsibilities:
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
- Develop programs in Scala and Python as part of data cleaning and processing.
- Assemble large, complex data sets that meet functional/non-functional business requirements, fostering data-driven decision making across the organization.
- Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Deliver high operational excellence, guaranteeing high availability and platform stability.
- Closely collaborate with the Data Science team and help them build and deploy machine learning and deep learning models on big data analytics platforms.
Skills:
- Experience with Big Data pipeline, Big Data analytics, Data warehousing.
- Experience with SQL/NoSQL, schema design, and dimensional data modeling.
- Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, and MapReduce.
- Experience in designing systems that process structured as well as unstructured data at large scale.
- Experience in AWS/Spark/Java/Scala/Python development.
- Strong skills in PySpark (Python & Spark): the ability to create, manage, and manipulate Spark DataFrames, with expertise in Spark query tuning and performance optimization.
- Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
- Prior exposure to streaming data sources such as Kafka.
- Knowledge of shell scripting and Python scripting.
- High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
- Experience with NoSQL databases such as Cassandra / MongoDB.
- Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience building and deploying applications on on-premise and cloud-based infrastructure.
- A good understanding of the machine learning landscape and its concepts.
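The MapReduce model named in the stack above boils down to three phases: map each input record to key-value pairs, shuffle (group by key), and reduce each group. A pure-Python word-count sketch of those phases, with no Hadoop dependency; the sample input is illustrative:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit (word, 1) for every word in one input line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big pipelines", "data pipelines at scale"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(l) for l in lines)))
print(counts["big"], counts["data"], counts["pipelines"])  # 2 2 2
```

In Hadoop or Spark the shuffle is distributed across nodes, but the per-phase contracts are exactly these, which is why set-oriented thinking transfers directly between the two.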
Qualifications and Experience:
Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.
Certifications:
Good to have at least one of the Certifications listed here:
AZ-900 - Azure Fundamentals
DP-200, DP-201, DP-203, AZ-204 - Data Engineering
AZ-400 - DevOps Certification
- Experience in web development with AngularJS
- Experience working with RDBMS such as Oracle, SQL Server, PostgreSQL.
- Experience in SOAP/RESTful web services, JavaScript, CSS, HTML, JSON, jQuery, AngularJS
Field Sales, Revenue 💵 - PagarBook
(Temporary, Full-time)
About Pagarbook:
1. Employee management at your fingertips. PagarBook - India's best payroll and attendance management tool for Small & Medium Enterprises.
2. Using PagarBook, a business owner can maintain all their employees' records digitally and get insights around the same.
3. PagarBook is a free, easy-to-use employee management, work, and payroll management app where you can manage all your staff and employees' attendance, record the work done by your staff and employees, and also record their salaries, payments, and advances.
4. SMS & WhatsApp notifications to employees and staff about payments, bonuses, daily work, attendance, and leaves.
5. Sales would be done for the premium desktop version of the application.
6. Benefits of the desktop application:
   a. Better and easier accessibility, also available on mobile web
   b. Access to rich reports which give you business knowledge, spend
   c. Unlimited free upgrades for a year
About Sales team:
The field sales team at PagarBook is all about solving SME customers' problems and ensuring we deliver compelling value propositions.
On a typical day, the sales team:
● drives adoption of the premium version of PagarBook, sold at a subscription fee per annum
● brings a never-say-no, win-all attitude
● keeps customer empathy as a core ingredient of its DNA
● always ensures the customer is well informed of all the benefits of the solution before closing the sale
Responsibilities and Duties of sales associate:
● You will pitch the PagarBook desktop solution to customers in the field
● You will explain the benefits of the PagarBook desktop solution to customers
○ Better and easier accessibility, also available on mobile web
○ Access to rich reports which give you business knowledge, spend
○ Unlimited free upgrades for a year
● Users can first register on the desktop for a free 7-day trial
● The sales associate then converts the customer into a paid customer
Mandatory requirements:
● Having your own bike is a must
● Field sales experience is a must (telecom, FMCG and financial sales
experience adds more weightage)
● Local language knowledge in the city of operations is a must
● Graduation in any field is sufficient
Benefits:
● Target-based incentives (extremely attractive)
● Petrol allowance of up to ~INR 3,000 (against actual bills)
● The initial posting would be for 3 months; based on good performance, conversion to full-time employment would be granted
Key Areas of Responsibility (KRAs):
● Onboarding of Customers
○ Meet 30 new clients a day (tracked)
○ Get free trial enabled for 20 clients a day
○ Get 3 sales (paid customers, 10% conversion) per day -
non-negotiable output
● Retaining customer accounts
○ While we build new businesses, cross-selling would be key
○ Customers should keep using PagarBook desktop beyond 30 days of
activation
● Grow customer basket size
○ Identify key SME clusters in the city and prepare an acquisition plan
○ Go aggressive and acquire
● Business acumen & key skills
○ Strong knowledge on PagarBook sales process, services and product
○ Local Market and competition knowledge and clear articulation of
PagarBook advantages
○ Strong negotiation and influencing skills to create win-win
○ Give continuous feedback to internal teams to improve our
customer service level
Finally, remember, you are the PagarBook brand ambassador on the ground! All
the best.
Requirements for field sales - Phase 1 (50):
● South:
○ Bangalore - 5 Associates
○ Coimbatore - 5 Associates