11+ Performance analysis Jobs in Chennai | Performance analysis Job openings in Chennai
Apply to 11+ Performance analysis Jobs in Chennai on CutShort.io. Explore the latest Performance analysis Job opportunities across top companies like Google, Amazon & Adobe.
Roles and Responsibilities:
· Support the design and implementation of performance management systems, including goal-setting, performance evaluations, and employee development plans.
· Conduct talent mapping aligned with the MHFA action plan.
· Handle the performance management cycle end to end and ensure timely, accurate completion of appraisals (e.g. forms, templates, communications).
· Analyse performance data and generate reports to identify trends, opportunities, and areas for improvement.
· Provide guidance and support to the team on skill development and competencies, and assess skill gaps.
· Develop and work with new staff on Key Result Areas (KRAs) and Key Performance Indicators (KPIs) aligned with organizational goals.
· Develop and update job descriptions and job specifications, keeping them accurate, compliant, and up to date for all positions.
· Conduct and coordinate the interview process, collaborating with hiring managers to understand their specific needs.
· Monitor the progress of tasks and key HR metrics.
· Develop individual career maps for each staff member, working closely with them to coach and support their development.
· Provide regular updates to management on the team's progress with their tasks and assess performance accordingly.
· Build HR systems and take charge of automating HR activities, processes, and information.
· Maintain a thorough understanding of the HRMS portal.
Job Requirements:
- Minimum of 5 to 7 years of relevant work experience in Human Resources, with a proven ability to manage processes and teams.
- Minimum of a Bachelor's degree in a relevant field, e.g. Human Resources or Business Administration.
- Strong communication, interpersonal, and organizational skills.
- Strong problem-solving and decision-making abilities.
- Demonstrated ability to handle confidential information with discretion.
- Proficiency with Microsoft Office / Google Docs / Google Sheets.
- Full-stack developer proficient in React.js and Node.js.
- Should be strong in algorithms.
- Working experience with AWS is a plus.
Shortcastle is a sports-tech company with a presence in over 65 countries, and a flexible, fun-filled team.
Greetings!
Hiring for the below position for one of our premium clients.
Role: PEGA Developer
Experience: 5-15 years
Location: PAN India
Certification: CSA & CSSA mandatory
- 4.5-9 years of experience in Pega development, with relevant Pega certification.
- Working experience with Pega PRPC and the Pega layer-cake structure: class structure, rule resolution, ruleset structure, portals, views, and harness components.
- Basic data storage and database query knowledge through Pega.
- Good knowledge around integration and exposure to REST and SOAP APIs.
- Good to have knowledge and experience on at least one Pega Industry framework.
- Should be able to track and manage own tasks.
- Should have very good verbal and written communication skills.
#HiringAlert
We are looking for a "Hyperion Planning" professional for a reputed client in Chennai (permanent role).
• Experience: 6+ yrs
Skills :
•Proven experience with Hyperion Planning version 11.x or EPM Cloud
•Extensive experience in developing and maintaining Hyperion Planning and Essbase applications
•Strong understanding of ASO (aggregate storage) and BSO (block storage) cube development
•Independently handle metadata build and security
•Expert level knowledge in writing business rules
•Knowledge of writing basic SQL
•Experience with SmartView, web forms, financial reports, MDX queries and MaxL scripting
•Oracle Relational Database experience and FDMEE experience is a plus
•Deep functional and technical knowledge of financial systems and business processes, especially around planning, budgeting, forecasting
•Apply structured knowledge to solve problems, break down issues and identify solutions.
•Strong oral and written communication skills are essential for this role
•The ability to work independently and be proactive
•Knowledge of integration between external systems
•Analytical and assessment skills essential
•Proven experience in providing system support and direct contact with users to solve issues with business applications
Location : Chennai & WFH
Work timing: 11 AM to 8 PM
- 3+ years' experience in practical implementation and deployment of ML-based systems preferred.
- BE/B Tech or M Tech (preferred) in CS/Engineering with strong mathematical/statistical background
- Strong mathematical and analytical skills, especially statistical and ML techniques, with familiarity with different supervised and unsupervised learning algorithms
- Implementation experiences and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimisation
- Experience in working on modeling graph structures related to spatiotemporal systems
- Programming skills in Python
- Experience in developing and deploying on cloud (AWS or Google or Azure)
- Good verbal and written communication skills
- Familiarity with well-known ML and data libraries such as Pandas, Keras and TensorFlow
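As a toy sketch of the supervised-learning fundamentals this role lists, the snippet below fits a line by ordinary least squares in plain Python. The data and function name are invented for illustration; real work would use NumPy or scikit-learn.

```python
# Toy supervised learning: fit a line y = w*x + b by ordinary least
# squares (closed form), then read off slope and intercept.

def fit_line(xs, ys):
    """Return (w, b) minimizing sum((w*x + b - y)**2)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var            # slope
    b = mean_y - w * mean_x  # intercept
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.1, 4.9, 7.0]  # roughly y = 2x + 1 with a little noise
w, b = fit_line(xs, ys)
```

The closed form recovers a slope near 2 and an intercept near 1 from the noisy points, which is the whole idea behind the more elaborate models the posting asks about.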
different channels on the field
● Develop and maintain relationships with partnered institutions for repeat business and referrals
● Arrange and plan events to generate leads, and handle product queries and service issues in the partnered institutions
● Meet clients, verify documents, process files, coordinate for sanction / disbursement of loans, and provide personalized service to clients
● Ensure the achievement of a given business target in your territory
•The person should review the HOB (Health of Business), take quick measures regarding areas of concern, and have processes/checkpoints in place.
•The person should work effectively in collaboration with cross-functional resources such as HR, Ops, etc.
•The person should come up with an innovative approach to the HOW / HOW ELSE aspect of achieving the goal.
•The person should support direct reports through interventions that focus on their development.
•The person should assess the branch culture and enable a high-performing work culture in the branch.
Key Competencies
•Team Leadership
•Planning and Organizing
•Conceptual thinking
Transition Plan
•Ability to select and train team members
•Ability to deploy and redeploy resources within the unit.
Span of
Job Description:
- Have intermediate/advanced knowledge of Python.
- Hands-on experience with OOP in Python; experience with the Flask/Django frameworks and ORM with MySQL or MongoDB is a plus.
- Must have experience writing shell scripts and configuration files. Should be proficient in Bash.
- Should have excellent Linux administration capabilities.
- Working experience with SCM; Git preferred.
- Should have knowledge of the basics of networking in Linux, and computer networks in general.
- Experience with engineering practices such as code refactoring, design patterns, design-driven development, and Continuous Integration.
- Understanding of Architecture of OpenStack/Kubernetes and good knowledge of standard client interfaces is a plus.
- Code contributions to the OpenStack/Kubernetes community are a plus.
- Understanding of the NFV and SDN domains is a plus.
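A minimal sketch of the Python-plus-shell-scripting skills this posting lists: parsing a key=value config (a common shape for Linux configuration files) and shelling out to bash via `subprocess`. The config keys and the echoed string are invented for illustration.

```python
# Parse a simple KEY=VALUE config and run a bash snippet from Python.
import subprocess

def parse_config(text):
    """Parse KEY=VALUE lines, skipping blanks and '#' comments."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            conf[key.strip()] = value.strip()
    return conf

def run_bash(snippet):
    """Run a bash snippet and return its stdout as text."""
    result = subprocess.run(["bash", "-c", snippet],
                            capture_output=True, text=True, check=True)
    return result.stdout

cfg = parse_config("# sample config\nHOST=localhost\nPORT=8080\n")
uname = run_bash("echo linux-box")
```

Gluing Python helpers to shell commands like this is the day-to-day shape of the Linux administration and scripting work described above.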
We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role would closely collaborate with the Data Science team and help the team build and deploy machine learning and deep learning models on big data analytics platforms.
Roles and Responsibilities:
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
- Develop programs in Scala and Python as part of data cleaning and processing.
- Assemble large, complex data sets that meet functional/non-functional business requirements, fostering data-driven decision-making across the organization.
- Design and develop distributed, high-volume, high-velocity, multi-threaded event-processing systems.
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Provide high operational excellence guaranteeing high availability and platform stability.
- Closely collaborate with the Data Science team and help the team build and deploy machine learning and deep learning models on big data analytics platforms.
Skills:
- Experience with Big Data pipeline, Big Data analytics, Data warehousing.
- Experience with SQL/No-SQL, schema design and dimensional data modeling.
- Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
- Experience in designing systems that process structured as well as unstructured data at large scale.
- Experience in AWS/Spark/Java/Scala/Python development.
- Strong skills in PySpark (Python & Spark): the ability to create, manage and manipulate Spark DataFrames, and expertise in Spark query tuning and performance optimization.
- Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
- Prior exposure to streaming data sources such as Kafka.
- Should have knowledge on Shell Scripting and Python scripting.
- High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
- Experience with NoSQL databases such as Cassandra / MongoDB.
- Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience building and deploying applications on on-premises and cloud-based infrastructure.
- A good understanding of the machine learning landscape and concepts.
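The "complex SQL for data preparation, cleaning, and wrangling" skill above can be sketched with the standard-library `sqlite3` module so it runs anywhere; the table, columns, and spend threshold are invented for illustration.

```python
# SQL data preparation in miniature: filter bad rows, aggregate per
# user, and keep only users above a threshold.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, amount REAL, status TEXT);
    INSERT INTO events VALUES
        (1, 10.0, 'ok'), (1, 5.0, 'ok'),
        (2, 7.5, 'failed'), (2, 2.5, 'ok');
""")

# Typical wrangling query: WHERE drops failed rows, GROUP BY
# aggregates per user, HAVING filters on the aggregate.
rows = conn.execute("""
    SELECT user_id, SUM(amount) AS total
    FROM events
    WHERE status = 'ok'
    GROUP BY user_id
    HAVING SUM(amount) > 3.0
    ORDER BY user_id
""").fetchall()
```

The same WHERE / GROUP BY / HAVING pattern scales directly to the warehouse-grade SQL and Spark SQL work the role describes.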
Qualifications and Experience:
Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.
Certifications:
Good to have at least one of the Certifications listed here:
AZ-900 - Azure Fundamentals
DP-200, DP-201, DP-203, AZ-204 - Data Engineering
AZ-400 - DevOps certification
Job Description: (8-12 years)
○ Develop best practices for the team; responsible for architecture solutions and documentation operations to meet the engineering department's quality standards
○ Participate in production outages, handle complex issues, and work towards resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Deep understanding of Kernel, Networking and OS fundamentals
○ Strong experience in writing helm charts.
○ Deep understanding of K8s.
○ Good knowledge of service meshes.
○ Good understanding of databases
Notice Period: 30 days max




