11+ Health care administration Jobs in Hyderabad | Health care administration Job openings in Hyderabad
- Perform complex and varied application support; build, maintain, and modify within a large environment
- Identify issues, evaluate possible solutions, and make recommendations on the most appropriate action
- Design, build, test, and document solution-specific information and perform related work as required.
- Demonstrate functional knowledge in relevant clinical applications (OE, PCS, PHA, EDM, ITS, RAD, LAB, MIC, BBK, PTH)
- Demonstrate functional knowledge in Clinical Documentation
- Must have hands-on build and troubleshooting experience
- Must be able to quickly identify and resolve complex system issues
- Must be able to build using technical specifications.
Basic Qualifications:
- Implementation and support experience within a MEDITECH Environment
- Bachelor's degree in Business, Healthcare Administration, Communication, Marketing, or a related field, or equivalent relevant work experience
Preferred Qualifications:
- Knowledge of ITIL Incident, Problem, and Change management
- Working knowledge of Agile, Product and Project methodologies
Publicis Sapient Overview:
As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python, experience in data ingestion, integration, and wrangling, computation, and analytics pipelines, and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.
Role & Responsibilities:
Your role is focused on the design, development, and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
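The batch-mode ingestion responsibility above (multiple heterogeneous sources merged into one unified batch) can be sketched in plain Python; the function names and sample data here are illustrative, not part of any specific framework.

```python
import csv
import io
import json

def ingest_csv(text):
    """Parse CSV rows into dicts (one heterogeneous source)."""
    return list(csv.DictReader(io.StringIO(text)))

def ingest_jsonl(text):
    """Parse newline-delimited JSON (a second source format)."""
    return [json.loads(line) for line in text.strip().splitlines()]

def batch_ingest(sources):
    """Merge records from multiple (parser, payload) sources into one batch."""
    records = []
    for parser, payload in sources:
        records.extend(parser(payload))
    return records

csv_data = "id,name\n1,alice\n2,bob\n"
jsonl_data = '{"id": "3", "name": "carol"}\n{"id": "4", "name": "dan"}\n'
batch = batch_ingest([(ingest_csv, csv_data), (ingest_jsonl, jsonl_data)])
print(len(batch))  # 4 records from two heterogeneous sources
```

Real-time mode would replace the in-memory payloads with a streaming source (e.g. a Kafka consumer loop), but the normalize-then-merge shape stays the same.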
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 5+ years of IT experience, with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala, or Python (Java preferred)
5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed, working knowledge of data-platform-related services on at least one cloud platform, including IAM and data security
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures
4.Performance tuning and optimization of data pipelines
5.CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality
6.Cloud data specialty and other related Big data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Key Responsibilities:
- Collaborate with business stakeholders and data analysts to understand reporting requirements and translate them into effective Power BI solutions.
- Design and develop interactive and visually compelling dashboards, reports, and visualizations using Microsoft Power BI.
- Ensure data accuracy and consistency in the reports by working closely with data engineers and data architects.
- Optimize and streamline existing Power BI reports and dashboards for better performance and user experience.
- Develop and maintain data models and data connections to various data sources, ensuring seamless data integration.
- Implement security measures and data access controls to protect sensitive information in Power BI reports.
- Troubleshoot and resolve issues related to Power BI reports, data refresh, and connectivity problems.
- Stay updated with the latest Power BI features and capabilities, and evaluate their potential use in improving existing solutions.
- Conduct training sessions and workshops for end-users to promote self-service BI capabilities and enable them to create their own reports.
- Collaborate with the wider data and analytics team to identify opportunities for using Power BI to enhance business processes and decision-making.
Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Power BI Developer or similar role, with a strong portfolio showcasing previous Power BI projects.
- Proficient in Microsoft Power BI, DAX (Data Analysis Expressions), and M (Power Query) to manipulate and analyze data effectively.
- Solid understanding of data visualization best practices and design principles to create engaging and intuitive dashboards.
- Strong SQL skills and experience with data modeling and database design concepts.
- Knowledge of data warehousing concepts and ETL (Extract, Transform, Load) processes.
- Ability to work with various data sources, including relational databases, APIs, and cloud-based platforms.
- Excellent problem-solving skills and a proactive approach to identifying and addressing issues in Power BI reports.
- Familiarity with data security and governance practices in the context of Power BI development.
- Strong communication and interpersonal skills to collaborate effectively with cross-functional teams and business stakeholders.
- Experience with other BI tools (e.g., Tableau, QlikView) is a plus.
The role of a Power BI Developer is critical in enabling data-driven decision-making and empowering business users to gain valuable insights from data. The successful candidate will have a passion for data visualization and analytics, along with the ability to adapt to new technologies and drive continuous improvement in BI solutions. If you are enthusiastic about leveraging the power of data through Power BI, we encourage you to apply and join our dynamic team.
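A DAX measure is, at its core, an aggregation evaluated per slicer context. The spirit of a measure like `SUM(Sales[Amount])` sliced by category can be mimicked in plain Python with a group-by; the table and column names below are hypothetical sample data, not from any real model.

```python
from collections import defaultdict

# Sample fact-table rows such as a Power BI model might hold (hypothetical).
sales = [
    {"category": "Bikes", "amount": 120.0},
    {"category": "Bikes", "amount": 80.0},
    {"category": "Helmets", "amount": 25.0},
]

def total_by_category(rows):
    """Aggregate amounts per category, as a DAX SUM measure would when the
    report visual groups by the category column."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["category"]] += row["amount"]
    return dict(totals)

print(total_by_category(sales))  # {'Bikes': 200.0, 'Helmets': 25.0}
```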
- At least 4 to 7 years of relevant experience as a Big Data Engineer
- Hands-on experience in Scala or Python
- Hands-on experience with major components of the Hadoop ecosystem, such as HDFS, MapReduce, Hive, and Impala
- Strong programming experience in building applications/platforms using Scala or Python
- Experienced in implementing Spark RDD transformations and actions to implement business analysis
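The RDD pattern referenced above separates lazy transformations (map, filter) from actions that force evaluation (reduce, collect). Python generators reproduce that shape without Spark, so this is a sketch of the concept rather than actual Spark code: nothing runs until the terminal `reduce()`.

```python
from functools import reduce

numbers = range(1, 11)

squared = (n * n for n in numbers)          # transformation: map (lazy)
evens = (n for n in squared if n % 2 == 0)  # transformation: filter (lazy)
total = reduce(lambda a, b: a + b, evens)   # action: triggers evaluation

print(total)  # 220 = 4 + 16 + 36 + 64 + 100
```

In real PySpark the equivalent would be `rdd.map(lambda n: n * n).filter(lambda n: n % 2 == 0).reduce(lambda a, b: a + b)`, with the same lazy-until-action semantics distributed across executors.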
We specialize in productizing solutions built on new technology.
Our vision is to build engineers with entrepreneurial and leadership mindsets who can create highly impactful products and solutions using technology to deliver immense value to our clients.
We strive to bring innovation and passion to everything we do, whether services, products, or solutions.
About Quadratyx:
We are a global, product-centric insight & automation services company. We help the world's organizations make better & faster decisions using the power of insight & intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing, and Advanced Analytics. Quadratyx can boast more extensive experience in data sciences & analytics than most other companies in India.
We firmly believe in Excellence Everywhere.
Job Description
Purpose of the Job/ Role:
• As a Technical Lead, your work is a combination of hands-on contribution, customer engagement and technical team management. Overall, you’ll design, architect, deploy and maintain big data solutions.
Key Requisites:
• Expertise in Data structures and algorithms.
• Technical management across the full life cycle of big data (Hadoop) projects from requirement gathering and analysis to platform selection, design of the architecture and deployment.
• Scaling of cloud-based infrastructure.
• Collaborating with business consultants, data scientists, engineers and developers to develop data solutions.
• Leading and mentoring a team of data engineers.
• Hands-on experience in test-driven development (TDD).
• Expertise in NoSQL databases like MongoDB, Cassandra, etc. (MongoDB preferred), and strong knowledge of relational databases.
• Good knowledge of Kafka and Spark Streaming internal architecture.
• Good knowledge of any Application Servers.
• Extensive knowledge of big data platforms like Hadoop, Hortonworks, etc.
• Knowledge of data ingestion and integration on cloud services such as AWS, Google Cloud, Azure, etc.
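The Kafka internals mentioned above rest on one core idea: an append-only, ordered log in which each consumer group tracks its own read offset. This toy class is an illustration of that model only, with no partitioning, persistence, or any real Kafka API.

```python
class MiniTopic:
    """Toy append-only log illustrating Kafka's consumer-offset model."""

    def __init__(self):
        self.log = []      # immutable, ordered message log
        self.offsets = {}  # consumer-group name -> next offset to read

    def produce(self, message):
        """Append a message to the end of the log."""
        self.log.append(message)

    def poll(self, group):
        """Return messages the group has not yet read and advance its offset."""
        start = self.offsets.get(group, 0)
        batch = self.log[start:]
        self.offsets[group] = len(self.log)
        return batch

topic = MiniTopic()
topic.produce("evt-1")
topic.produce("evt-2")
first = topic.poll("analytics")   # ['evt-1', 'evt-2']
topic.produce("evt-3")
second = topic.poll("analytics")  # ['evt-3'] -- only new messages
```

Because offsets are per group, a second group polling the same topic would independently receive all three events, which is how Kafka fans the same stream out to multiple downstream consumers.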
Skills/ Competencies Required
Technical Skills
• Strong expertise (9 or more out of 10) in at least one modern programming language, like Python, or Java.
• Clear end-to-end experience in designing, programming, and implementing large software systems.
• Passion and analytical abilities to solve complex problems.
Soft Skills
• Always speaking your mind freely.
• Communicating ideas clearly in talking and writing, integrity to never copy or plagiarize intellectual property of others.
• Exercising discretion and independent judgment where needed in performing duties; not needing micro-management, maintaining high professional standards.
Academic Qualifications & Experience Required
Required Educational Qualification & Relevant Experience
• Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute.
• Minimum 7 - 10 years of work experience as a developer in an IT organization (preferably with an Analytics / Big Data / Data Science / AI background).
A Business Transformation Organization that partners with businesses to co-create customer-centric, hyper-personalized solutions to achieve exponential growth. Invente offers platforms and services that enable businesses to provide a human-free customer experience and business process automation.
Location: Hyderabad (WFO)
Budget: Open
Position: Azure Data Engineer
Experience: 5+ years of commercial experience
Responsibilities
● Design and implement Azure data solutions using ADLS Gen 2.0, Azure Data Factory, Synapse, Databricks, SQL, and Power BI
● Build and maintain data pipelines and ETL processes to ensure efficient data ingestion and processing
● Develop and manage data warehouses and data lakes
● Ensure data quality, integrity, and security
● Implement data solutions for existing use cases required by the AI and analytics teams.
● Collaborate with other teams to integrate data solutions with other systems and applications
● Stay up-to-date with emerging data technologies and recommend new solutions to improve our data infrastructure
Skills and requirements
- Experience analyzing complex and varied data in a commercial or academic setting.
- Desire to solve new and complex problems every day.
- Excellent ability to communicate scientific results to both technical and non-technical team members.
Desirable
- A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering, or Biological Sciences.
- Hands-on experience with Python, PySpark, and SQL.
- Hands-on experience building end-to-end data pipelines.
- Hands-on experience with Azure Data Factory, Azure Databricks, and Data Lake is an added advantage.
- Experience with Big Data tools: Hadoop, Hive, Sqoop, Spark, SparkSQL
- Experience with SQL or NoSQL databases for the purposes of data retrieval and management.
- Experience in data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions.
- BS degree in math, statistics, computer science or equivalent technical field.
- Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets.
- Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
- Willing to learn and work on Data Science, ML, AI.
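The SQL retrieval skill in the list above often shows up in pipelines as an aggregate-before-load step. A minimal sketch using Python's built-in sqlite3 module, with a hypothetical in-memory table standing in for a warehouse:

```python
import sqlite3

# In-memory SQLite database standing in for a warehouse table (sample data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("alice", "click"), ("alice", "buy"), ("bob", "click")],
)

# Typical retrieval step: aggregate before loading downstream.
rows = conn.execute(
    "SELECT user, COUNT(*) FROM events GROUP BY user ORDER BY user"
).fetchall()
print(rows)  # [('alice', 2), ('bob', 1)]
conn.close()
```

The same `GROUP BY` shape carries over directly to warehouse engines and to Spark SQL; only the connection layer changes.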
Should design and operate data pipelines.
Build and manage an analytics platform using Elasticsearch, Redshift, and MongoDB.
Strong programming fundamentals in data structures and algorithms.
at Altimetrik
Big Data Engineer: 5+ yrs.
Immediate Joiner
- Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
- Experience in developing lambda functions with AWS Lambda
- Expertise with Spark/PySpark – Candidate should be hands on with PySpark code and should be able to do transformations with Spark
- Should be able to code in Python and Scala.
- Snowflake experience will be a plus
- Hadoop and Hive are good to have; a working understanding is sufficient rather than a hard requirement.
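The Lambda requirement above boils down to writing a handler function with the `(event, context)` signature AWS invokes. This sketch applies a record-cleaning transformation with no AWS dependency; the event shape and field names are hypothetical.

```python
def handler(event, context=None):
    """Uppercase each record's name field, dropping malformed records.
    Mirrors the (event, context) signature AWS Lambda expects."""
    records = event.get("records", [])
    cleaned = [
        {**r, "name": r["name"].upper()}
        for r in records
        if "name" in r  # drop records missing the required field
    ]
    return {"count": len(cleaned), "records": cleaned}

result = handler({"records": [{"name": "glue"}, {"id": 7}, {"name": "athena"}]})
print(result["count"])  # 2 -- the malformed record was dropped
```

In a Glue -> Athena -> QuickSight pipeline, a function like this would typically sit between stages, triggered by an S3 event or a Glue job completion.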
Job Description
We are looking for an experienced engineer with superb technical skills who will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytical solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, and scalable software solutions.
Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.
Skills
- Bachelor's/Master's/PhD in CS, or equivalent industry experience
- Demonstrated expertise in building and shipping cloud-native applications
- 5+ years of industry experience administering (including setting up, managing, and monitoring) data processing pipelines, both streaming and batch, using frameworks such as Kafka Streams and PySpark, and streaming databases like Druid or equivalents such as Hive
- Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
- Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
- 5+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Experience with scripting languages (Python experience highly desirable); experience in API development using Swagger
- Implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools, such as Git
- Familiarity with continuous integration tools such as Jenkins
Responsibilities
- Architect, Design and Implement Large scale data processing pipelines using Kafka Streams, PySpark, Fluentd and Druid
- Create custom Operators for Kubernetes, Kubeflow
- Develop data ingestion processes and ETLs
- Assist in dev ops operations
- Design and Implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution.
- Mentor team members on best practices
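The streaming pipelines above (Kafka Streams, Druid) continuously compute windowed aggregates. The core idea of a tumbling window, bucketing events by fixed time intervals, can be sketched in plain Python; the timestamps below are hypothetical clickstream data.

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds):
    """Count events per fixed (tumbling) time window, keyed by window start --
    the kind of aggregation Kafka Streams or Druid performs continuously."""
    counts = Counter()
    for timestamp in events:
        # Round each timestamp down to the start of its window.
        window_start = (timestamp // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

# Event timestamps in seconds (hypothetical clickstream).
events = [1, 2, 9, 11, 12, 25]
print(tumbling_window_counts(events, 10))  # {0: 3, 10: 2, 20: 1}
```

A production engine adds what this sketch omits: out-of-order event handling, watermarks, and incremental emission of results as windows close.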
JOB DESCRIPTION
- 2 to 6 years of experience in imparting technical training/ mentoring
- Must have very strong concepts of Data Analytics
- Must have hands-on and training experience on Python, Advanced Python, R programming, SAS and machine learning
- Must have good knowledge of SQL and Advanced SQL
- Should have basic knowledge of Statistics
- Should be well-versed in operating systems (GNU/Linux) and network fundamentals
- Must have knowledge of MS Office (Excel/Word/PowerPoint)
- Self-Motivated and passionate about technology
- Excellent analytical and logical skills and team player
- Must have exceptional Communication Skills/ Presentation Skills
- Good aptitude skills are preferred
Responsibilities:
- Ability to quickly learn any new technology and impart the same to other employees
- Ability to resolve all technical queries of students
- Conduct training sessions and drive the placement driven quality in the training
- Must be able to work independently without the supervision of a senior person
- Participate in reviews/ meetings
Qualification:
- UG: Any Graduate in IT/Computer Science, B.Tech/B.E. – IT/ Computers
- PG: MCA/MS/MSC – Computer Science
- Any Graduate/ Post graduate, provided they are certified in similar courses
ABOUT EDUBRIDGE
EduBridge is an Equal Opportunity employer and we believe in building a meritorious culture where everyone is recognized for their skills and contribution.
Launched in 2009, EduBridge Learning is a workforce development and skilling organization with 50+ training academies across 18 states pan-India. The organization has been providing skilled manpower to corporates for over 10 years and is a leader in its space. We have trained over a lakh semi-urban and economically underprivileged youth in relevant life skills and industry-specific skills, and provided placements in over 500 companies. Our latest product, E-ON, is committed to complementing our training delivery with an online training platform, enabling students to learn anywhere and anytime.
To know more about EduBridge please visit: http://www.edubridgeindia.com/
You can also visit us on Facebook (https://www.facebook.com/Edubridgelearning/) and LinkedIn (https://www.linkedin.com/company/edubridgelearning/) for our latest initiatives and products.