
11+ HiveQL Jobs in Pune | HiveQL Job openings in Pune

Apply to 11+ HiveQL Jobs in Pune on CutShort.io. Explore the latest HiveQL Job opportunities across top companies like Google, Amazon & Adobe.

Maveric Systems

Posted by Rashmi Poovaiah
Bengaluru (Bangalore), Chennai, Pune
4 - 10 yrs
₹8L - ₹15L / yr
Big Data
Hadoop
Spark
Apache Kafka
HiveQL
+2 more

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be part of building an advanced analytics platform that leverages Big Data technologies and transforms legacy systems. The role offers an exciting, fast-paced, constantly changing and challenging work environment, and will play an important part in resolving and influencing high-level decisions.

 

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • An overall minimum of 4 to 8 years of software development experience, including 2 years of Data Warehousing domain knowledge
  • Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc. (an illustrative sketch follows this list)
  • Excellent knowledge of SQL & Linux shell scripting
  • Bachelor's/Master's/Engineering degree from a reputed university.
  • Strong communication, interpersonal, learning and organizing skills, matched with the ability to manage stress, time and people effectively
  • Proven experience in coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
  • Ability to manage a diverse and challenging stakeholder community
  • Diverse knowledge and experience of working on Agile deliveries and Scrum teams.
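
For illustration only (not part of the posting), a minimal PySpark sketch of the kind of hands-on Hive/Spark work the role calls for; the table and column names are hypothetical:

  # Minimal PySpark sketch: run a HiveQL aggregation through Spark.
  # The table (sales.orders) and its columns are hypothetical.
  from pyspark.sql import SparkSession

  spark = (
      SparkSession.builder
      .appName("hiveql-example")
      .enableHiveSupport()  # lets Spark query tables in the Hive metastore
      .getOrCreate()
  )

  daily_totals = spark.sql("""
      SELECT order_date, SUM(amount) AS total_amount
      FROM sales.orders
      GROUP BY order_date
  """)
  daily_totals.show()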

 

Responsibilities

  • Should work as a senior developer or individual contributor depending on the situation
  • Should take part in Scrum discussions and gather requirements
  • Adhere to the Scrum timeline and deliver accordingly
  • Participate in a team environment for design, development and implementation
  • Should take on L3 activities on a need basis
  • Prepare Unit/SIT/UAT test cases and log the results
  • Coordinate SIT and UAT testing; take feedback and provide necessary remediation/recommendations in time
  • Treat quality delivery and automation as top priorities
  • Coordinate changes and deployments in time
  • Should foster healthy harmony within the team
  • Own interaction points with members of the core team (e.g. BA, testing and business teams) and any other relevant stakeholders
DeepIntent

Posted by Indrajeet Deshmukh
Pune
4 - 8 yrs
Best in industry
Data Warehouse (DWH)
Informatica
ETL
SQL
Google Cloud Platform (GCP)
+3 more

Who We Are:

DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data.

What You’ll Do:

We are looking for a Senior Software Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.  

This role will be in the Analytics organization and will require integration and partnership with the Engineering organization. The ideal candidate is an inquisitive self-starter who is not afraid to take on and learn from challenges and will constantly seek to improve the facets of the business they manage. The ideal candidate will also need to demonstrate the ability to collaborate and partner with others.

  • Serve as the Engineering interface between the Analytics and Engineering teams
  • Develop and standardize all interface points for analysts to retrieve and analyze data, with a focus on research methodologies and data-based decisioning
  • Optimize queries and data-access efficiency; serve as the expert on how to most efficiently attain desired data points
  • Build "mastered" versions of the data for Analytics-specific querying use cases
  • Help with data ETL and table performance optimization
  • Establish a formal data practice for the Analytics organization in conjunction with the rest of DeepIntent
  • Build & operate scalable and robust data architectures
  • Interpret analytics methodology requirements and apply them to the data architecture to create standardized queries and operations for use by analytics teams
  • Implement DataOps practices
  • Master existing and new data pipelines and develop appropriate queries to meet analytics-specific objectives (an illustrative sketch follows this list)
  • Collaborate with various business stakeholders, software engineers, machine learning engineers and analysts
  • Operate between Engineers and Analysts to unify both practices for analytics insight creation
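
As an illustration only (not part of the posting), a minimal Airflow-style sketch of a standardized daily pipeline of this kind; the DAG, task and function names are all hypothetical:

  # Hypothetical Airflow DAG sketch: a standardized daily refresh of a
  # "mastered" analytics table. All names here are invented.
  from datetime import datetime

  from airflow import DAG
  from airflow.operators.python import PythonOperator

  def extract_events():
      pass  # pull raw data from the source system

  def build_mastered_table():
      pass  # apply the standardized transforms analysts query against

  with DAG(
      dag_id="analytics_mastered_refresh",
      start_date=datetime(2024, 1, 1),
      schedule_interval="@daily",  # named "schedule" in newer Airflow releases
      catchup=False,
  ) as dag:
      extract = PythonOperator(task_id="extract_events",
                               python_callable=extract_events)
      master = PythonOperator(task_id="build_mastered_table",
                              python_callable=build_mastered_table)
      extract >> master  # the mastered build runs only after extract succeeds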

Who You Are:

  • Adept in market research methodologies and in using data to deliver representative insights
  • Inquisitive and curious; understands how to query complicated data sets and move and combine data between databases
  • Deep SQL experience is a must
  • Exceptional communication skills, with the ability to collaborate and translate between technical and non-technical needs
  • English-language fluency and proven success working with teams in the U.S.
  • Experience in designing, developing and operating configurable data pipelines serving high-volume, high-velocity data
  • Experience working with public clouds like GCP/AWS
  • Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies
  • Experience building data architectures that optimize performance and cost, whether the components are prepackaged or homegrown
  • Proficient with SQL, Python or a JVM-based language, and Bash
  • Experience with Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
  • Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, learn and be curious
  • Comfortable working in the EST time zone


DeepIntent

Posted by Indrajeet Deshmukh
Pune
2 - 5 yrs
Best in industry
Data Warehouse (DWH)
Informatica
ETL
SQL
Java
+1 more

Who You Are:


- In-depth and strong knowledge of SQL.

- Basic knowledge of Java.

- Basic scripting knowledge.

- Strong analytical skills.

- Excellent debugging and problem-solving skills.


What You’ll Do:


- Comfortable working across EST and IST time zones.

- Troubleshoot complex issues discovered in-house as well as in customer environments.

- Replicate customer environments/issues on the platform and data, and work to identify the root cause or provide an interim workaround as needed.

- Debug SQL queries associated with data pipelines.

- Monitor and debug ETL jobs on a daily basis (a minimal sketch of one such check follows this list).

- Provide technical action plans to take a customer/product issue from start to resolution.

- Capture and document any data incidents identified on the platform, and maintain a history of such issues along with their resolutions.

- Identify product bugs and improvements based on customer environments and work to close them.

- Ensure implementation/continuous improvement of formal processes to support product development activities.

- Communicate well, externally and internally, across stakeholders.
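
For illustration only (not part of the posting), a minimal sketch, assuming SQLAlchemy and hypothetical connection strings and table names, of one routine ETL sanity check of the kind described above:

  # Minimal sketch of a routine ETL sanity check: compare row counts
  # between a source table and its warehouse copy to spot dropped rows.
  # Connection strings and table names are hypothetical.
  import sqlalchemy as sa

  source = sa.create_engine("postgresql://user:pass@source-db/app")
  target = sa.create_engine("postgresql://user:pass@warehouse/analytics")

  def row_count(engine, table):
      with engine.connect() as conn:
          return conn.execute(sa.text(f"SELECT COUNT(*) FROM {table}")).scalar()

  src_rows = row_count(source, "public.events")
  tgt_rows = row_count(target, "staging.events")

  if src_rows != tgt_rows:
      print(f"row-count mismatch: source={src_rows}, target={tgt_rows}")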

EnterpriseMinds

Posted by Rani Galipalli
Bengaluru (Bangalore), Pune, Mumbai
6 - 8 yrs
₹25L - ₹28L / yr
ETL
Informatica
Data Warehouse (DWH)
ETL management
SQL
+1 more

Your key responsibilities

 

  • Create and maintain optimal data pipeline architecture. Should have experience in building batch/real-time ETL data pipelines. Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources
  • The individual will be responsible for solution design, integration, data sourcing, transformation, database design and implementation of complex data warehousing solutions.
  • Responsible for development, support, maintenance and implementation of a complex project module
  • Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging and implementation
  • Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes and industry standards
  • Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for complete reporting solutions.
  • Prepare the HLD describing the application architecture and high-level design.
  • Prepare the LLD covering job design, job descriptions and detailed information about the jobs.
  • Prepare unit test cases and execute them.
  • Provide technical guidance and mentoring to application development teams throughout all phases of the software development life cycle

Skills and attributes for success

 

  • Strong experience in SQL: proficient in writing and debugging complex, performant SQL against large data volumes.
  • Strong experience with database systems on Microsoft Azure; experienced with Azure Data Factory.
  • Strong in Data Warehousing concepts. Experience with large-scale data warehousing architecture and data modelling.
  • Should have enough experience to work on PowerShell scripting
  • Able to guide the team through the development, testing and implementation stages and review the completed work effectively
  • Able to make quick decisions and solve technical problems to provide an efficient environment for project implementation
  • Primary owner of delivery and timelines; review code written by other engineers.
  • Maintain the highest levels of development practice, including technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution, and writing clean, modular and self-sustaining code, with repeatable quality and predictability
  • Must have an understanding of business intelligence development in the IT industry
  • Outstanding written and verbal communication skills
  • Should be adept in the SDLC process: requirement analysis, time estimation, design, development, testing and maintenance
  • Hands-on experience in installing, configuring, operating and monitoring CI/CD pipeline tools
  • Should be able to orchestrate and automate pipelines
  • Good to have: knowledge of distributed systems such as Hadoop, Hive, Spark

 

To qualify for the role, you must have

 

  • Bachelor's Degree in Computer Science, Economics, Engineering, IT, Mathematics, or related field preferred
  • More than 6 years of experience in ETL development projects
  • Proven experience in delivering effective technical ETL strategies
  • Microsoft Azure project experience
  • Technologies: ETL (ADF), SQL, Azure components (must have), Python (nice to have)

 

Ideally, you’ll also have

Celebal Technologies

Posted by Payal Hasnani
Jaipur, Noida, Gurugram, Delhi, Ghaziabad, Faridabad, Pune, Mumbai
5 - 15 yrs
₹7L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
Job Responsibilities:

• Project Planning and Management
o Take end-to-end ownership of multiple projects / project tracks
o Create and maintain project plans and other related documentation for project objectives, scope, schedule and delivery milestones
o Lead and participate across all the phases of software engineering, right from requirements gathering to GO LIVE
o Lead internal team meetings on solution architecture, effort estimation, manpower planning and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by developing effective mitigation plans
• Team Management
o Act as the Scrum Master
o Conduct SCRUM ceremonies like Sprint Planning, Daily Standup, Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and SCRUM principles
o Make the team accountable for their tasks and help the team in achieving them
o Identify the requirements and come up with a plan for Skill Development for all team members
• Communication
o Be the Single Point of Contact for the client in terms of day-to-day communication
o Periodically communicate project status to all the stakeholders (internal/external)
• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team

Must have:
• Minimum 8 years of experience (hands-on as well as leadership) in software / data engineering across multiple job functions like Business Analysis, Development, Solutioning, QA, DevOps and Project Management
• Hands-on as well as leadership experience in Big Data Engineering projects
• Experience developing or managing cloud solutions using Azure or other cloud providers
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL DBs, SQL, Data Warehousing, ETL/ELT and DevOps tools
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems-level critical thinking skills
• Strong collaboration and influencing skills

Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL Pool, Databricks, PowerBI, Machine Learning, Cloud Infrastructure
• Background in BFSI with focus on core banking
• Willingness to travel

Work Environment
• Customer Office (Mumbai) / Remote Work

Education
• UG: B. Tech - Computers / B. E. – Computers / BCA / B.Sc. Computer Science
Pune
5 - 9 yrs
₹5L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more
This role is for a developer with strong core application or system programming skills in Scala and Java, and good exposure to concepts and/or technology across the broader spectrum. Enterprise Risk Technology covers a variety of existing systems and green-field projects.
Full-stack Hadoop development experience with Scala, plus full-stack Java development experience covering Core Java (including JDK 1.8) and a good understanding of design patterns.
Requirements:-
• Strong hands-on development in Java technologies.
• Strong hands-on development in Hadoop technologies like Spark and Scala, and experience with Avro.
• Participation in product feature design and documentation.
• Requirement break-up, ownership and implementation.
• Product BAU deliveries and Level 3 production defect fixes.
Qualifications & Experience
• Degree holder in a numerate subject
• Hands-on experience with Hadoop, Spark, Scala, Impala, Avro and messaging like Kafka
• Experience across a core compiled language – Java
• Proficiency in Java-related frameworks like Spring, Hibernate, JPA
• Hands-on experience in JDK 1.8 and a strong skill set covering Collections and multithreading, with experience working on distributed applications.
• Strong hands-on development track record with end-to-end development cycle involvement
• Good exposure to computational concepts
• Good communication and interpersonal skills
• Working knowledge of risk and derivatives pricing (optional)
• Proficiency in SQL (PL/SQL), data modelling.
• Understanding of Hadoop architecture and the Scala programming language is good to have.
Simplifai Cognitive Solutions Pvt Ltd
Posted by Priyanka Malani
Pune
2 - 15 yrs
₹10L - ₹30L / yr
Spark
Big Data
Apache Spark
Python
PySpark
+1 more

We are looking for a skilled Senior/Lead Big Data Engineer to join our team. The role is part of the research and development team, where, with your enthusiasm and knowledge, you will be our technical evangelist for the development of our inspection technology and products.

 

At Elop we are developing product lines for sustainable infrastructure management using our own patented ultrasound scanner technology, combining it with other sources to give a holistic overview of concrete structures. At Elop we will provide you with world-class colleagues who are highly motivated to position the company as an international standard in structural health monitoring. With the right character you will be professionally challenged and developed.

This position requires travel to Norway.

 

Elop is a sister company of Simplifai, and the two are co-located in all geographic locations.

https://elop.no/

https://www.simplifai.ai/en/


Roles and Responsibilities

  • Define technical scope and objectives through research and participation in requirements gathering and definition of processes
  • Ingest and process data from data sources (Elop Scanner) in raw format into the Big Data ecosystem
  • Real-time data feed processing using the Big Data ecosystem
  • Design, review, implement and optimize data transformation processes in the Big Data ecosystem
  • Test and prototype new data integration/processing tools, techniques and methodologies
  • Conversion of MATLAB code into Python/C/C++.
  • Participate in overall test planning for the application integrations, functional areas and projects.
  • Work with cross-functional teams in an Agile/Scrum environment to ensure a quality product is delivered.

Desired Candidate Profile

  • Bachelor's degree in Statistics, Computer Science or equivalent
  • 7+ years of experience in the Big Data ecosystem, especially Spark, Kafka, Hadoop, HBase.
  • 7+ years of hands-on experience in Python/Scala is a must.
  • Experience in architecting big data applications is needed.
  • Excellent analytical and problem-solving skills
  • Strong understanding of data analytics and data visualization; must be able to help the development team with visualization of data.
  • Experience with signal processing is a plus.
  • Experience working on client-server architecture is a plus.
  • Knowledge of database technologies like RDBMS, graph DBs, document DBs, Apache Cassandra, OpenTSDB
  • Good communication skills, written and oral, in English

We can Offer

  • An everyday life with exciting and challenging tasks, developing socially beneficial solutions
  • Be a part of the company's research and development team, creating unique and innovative products
  • Colleagues with world-class expertise, and an organization that has ambitions and is highly motivated to position the company as an international player in maintenance support and monitoring of critical infrastructure!
  • A good working environment with skilled and committed colleagues, and an organization with short decision paths.
  • Professional challenges and development
A2Tech Consultants

Posted by Dhaval B
Pune
4 - 12 yrs
₹6L - ₹15L / yr
Data engineering
Data Engineer
ETL
Spark
Apache Kafka
+5 more
We are looking for a smart candidate with:
  • Strong Python coding skills and OOP skills
  • Should have worked on Big Data product architecture
  • Should have worked with at least one SQL-based database like MySQL or PostgreSQL, and at least one NoSQL-based database such as Cassandra, Elasticsearch, etc.
  • Hands-on experience with frameworks like Spark RDD, DataFrame, Dataset
  • Experience in developing ETL for data products
  • Candidate should have working knowledge of performance optimization, optimal resource utilization, parallelism and tuning of Spark jobs
  • Working knowledge of file formats: CSV, JSON, XML, PARQUET, ORC, AVRO (an illustrative sketch follows the key skills below)
  • Good to have: working knowledge of any one of the analytical databases like Druid, MongoDB, Apache Hive, etc.
  • Experience handling real-time data feeds (good to have working knowledge of Apache Kafka or a similar tool)
Key Skills:
  • Python and Scala (Optional), Spark / PySpark, Parallel programming
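
For illustration only (not part of the posting), a minimal PySpark sketch of working across the file formats named above, converting a CSV feed to Parquet; the file paths are hypothetical:

  # Minimal PySpark sketch: convert a CSV feed to Parquet.
  # File paths and the inferred schema are hypothetical.
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

  events = (
      spark.read
      .option("header", True)       # first line holds column names
      .option("inferSchema", True)  # let Spark guess column types
      .csv("/data/raw/events.csv")
  )

  # Columnar Parquet is far cheaper to scan than row-oriented CSV
  events.write.mode("overwrite").parquet("/data/curated/events.parquet")
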
Pune
5 - 8 yrs
₹10L - ₹17L / yr
Python
Big Data
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+3 more
  • Must have 5-8 years of experience in handling data
  • Must have the ability to interpret large amounts of data and to multi-task
  • Must have strong knowledge of and experience with programming (Python), Linux/Bash scripting and databases (SQL, etc.)
  • Must have strong analytical and critical thinking skills to resolve business problems using data and tech
  • Must have domain familiarity with and interest in cloud technologies (Google GCP / Microsoft Azure / Amazon AWS), open-source technologies and enterprise technologies
  • Must have the ability to collect, organize, analyze and disseminate significant amounts of information with attention to detail and accuracy.
  • Must have good communication skills
  • Working knowledge of / exposure to Elasticsearch, PostgreSQL, Athena, PrestoDB, Jupyter Notebook
Techknomatic Services Pvt. Ltd.
Posted by Techknomatic Services
Pune, Mumbai
2 - 6 yrs
₹4L - ₹9L / yr
Tableau
SQL
Business Intelligence (BI)
Role Summary:
Lead and drive development in the BI domain using the Tableau ecosystem, with deep technical and BI-ecosystem knowledge. The resource will be responsible for dashboard design, development and delivery of BI services using the Tableau ecosystem.

Key functions & responsibilities:
• Communication and interaction with the Project Manager to understand the requirements
• Dashboard design, development and deployment using the Tableau ecosystem
• Ensure delivery within the given time frame while maintaining quality
• Stay up to date with current tech and bring relevant ideas to the table
• Proactively work with the Management team to identify and resolve issues
• Perform other related duties as assigned or advised
• He/she should be a leader who sets the standard and expectations through example in his/her conduct, work ethic, integrity and character
• Contribute to dashboard design, R&D and project delivery using Tableau

Candidate’s Profile
Academics:
• Bachelor's degree, preferably in Computer Science.
• A Master's degree would be an added advantage.

Experience:
• Overall 2-5 years of experience in DWBI development projects, having worked on BI and visualization technologies (Tableau, QlikView) for at least 2 years.
• At least 2 years of experience covering the Tableau implementation lifecycle, including hands-on development/programming, managing security, data modelling, data blending, etc.

Technology & Skills:
• Hands-on expertise in Tableau administration and maintenance
• Strong working knowledge of, and development experience with, Tableau Server and Desktop
• Strong knowledge of SQL, PL/SQL and data modelling
• Knowledge of databases like Microsoft SQL Server, Oracle, etc.
• Exposure to alternative visualization technologies like QlikView, Spotfire, Pentaho, etc.
• Good communication & analytical skills with excellent creative and conceptual thinking abilities
• Superior organizational skills, attention to detail/level of quality, and strong communication skills, both verbal and written
Computer Power Group Pvt Ltd
Bengaluru (Bangalore), Chennai, Pune, Mumbai
7 - 13 yrs
₹14L - ₹20L / yr
R Programming
Python
Data Science
SQL server
Business Analysis
+3 more
Requirement Specifications:

Job Title: Data Scientist
Experience: 7 to 10 Years
Work Location: Mumbai, Bengaluru, Chennai
Job Role: Permanent
Notice Period: Immediate to 60 days

Job description:
• Support delivery of one or more data science use cases, leading on data discovery and model-building activities
• Conceptualize and quickly build POCs on new product ideas; should be willing to work as an individual contributor
• Open to learning and implementing newer tools/products
• Experiment with and identify the best methods/techniques and algorithms for analytical problems
• Operationalize: work closely with the engineering, infrastructure, service management and business teams to operationalize use cases

Essential Skills:
• Minimum 2-7 years of hands-on experience with statistical software tools: SQL, R, Python
• 3+ years' experience in business analytics, forecasting or business planning with an emphasis on analytical modeling, quantitative reasoning and metrics reporting
• Experience working with large data sets in order to extract business insights or build predictive models
• Proficiency in one or more statistical tools/languages (Python, Scala, R, SPSS or SAS) and related packages like Pandas, SciPy/Scikit-learn, NumPy, etc.
• Good data intuition / analysis skills; SQL and PL/SQL knowledge is a must
• Manage and transform a variety of datasets to cleanse, join and aggregate them
• Hands-on experience running various methods like regression, random forest, k-NN, k-means, boosted trees, SVM, neural networks, text mining, NLP, statistical modelling, data mining, exploratory data analysis and statistics (hypothesis testing, descriptive statistics); an illustrative sketch follows this listing
• Deep domain knowledge (BFSI, Manufacturing, Auto, Airlines, Supply Chain, Retail & CPG)
• Demonstrated ability to work under time constraints while delivering incremental value
• Education: minimum a Master's in Statistics, or a PhD in domains linked to applied statistics, applied physics, Artificial Intelligence, Computer Vision, etc.; BE/BTECH/BSC Statistics/BSC Maths
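
For illustration only (not part of the posting), a minimal scikit-learn sketch of one of the methods named above, a random forest classifier, fitted on synthetic data:

  # Minimal scikit-learn sketch: fit a random forest on synthetic data.
  # The dataset is generated purely for illustration.
  from sklearn.datasets import make_classification
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.metrics import accuracy_score
  from sklearn.model_selection import train_test_split

  X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

  model = RandomForestClassifier(n_estimators=200, random_state=42)
  model.fit(X_train, y_train)
  print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))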