Apache Drill Jobs in Bangalore (Bengaluru)


Apply to 11+ Apache Drill Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Apache Drill Job opportunities across top companies like Google, Amazon & Adobe.

Uber
Posted by Suvidha Chib
Bengaluru (Bangalore)
7 - 15 yrs
₹0L / yr
Big Data
Hadoop
Kafka
Spark
Apache Hive
+9 more

Data Platform Engineering at Uber is looking for a strong Technical Lead (Level 5a Engineer) who has built high-quality platforms and services that can operate at scale. A 5a Engineer at Uber exhibits the following qualities:

 

  • Tech expertise: Demonstrate the technical skills to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
  • Execute large-scale projects: Define, plan and execute complex and impactful projects. You communicate the vision to peers and stakeholders.
  • Collaborate across teams: Act as a domain resource for engineers outside your team and help them leverage the right solutions. Facilitate technical discussions and drive them to a consensus.
  • Coach engineers: Coach and mentor less experienced engineers and deeply invest in their learning and success. You give and solicit feedback, both positive and negative, to help improve the entire team.
  • Tech leadership: Lead the effort to define best practices in your immediate team, and help the broader organization establish better technical or business processes.


What You’ll Do

  • Build a scalable, reliable, operable and performant data analytics platform for Uber’s engineers, data scientists, products and operations teams.
  • Work alongside the pioneers of big data systems such as Hive, YARN, Spark, Presto, Kafka and Flink to build out a highly reliable, performant, easy-to-use software system for Uber's planet scale of data.
  • Become proficient in the multi-tenancy, resource isolation, abuse prevention, and self-serve debuggability aspects of a high-performance, large-scale service while building these capabilities for Uber's engineers and operations folks.

 

What You’ll Need

  • 7+ years of experience building large-scale products, data platforms and distributed systems in a high-caliber environment.
  • Architecture: Identify and solve major architectural problems by going deep in your field or broad across different teams. Extend, improve or, when needed, build solutions to address architectural gaps or technical debt.
  • Software Engineering/Programming: Create frameworks and abstractions that are reliable and reusable. You have advanced knowledge of at least one programming language and are happy to learn more. Our core languages are Java, Python, Go, and Scala.
  • Data Engineering: Expertise in one of the big data analytics technologies we currently use, such as Apache Hadoop (HDFS and YARN), Apache Hive, Impala, Drill, Spark, Tez, Presto, Calcite, Parquet, Arrow etc. Under-the-hood experience with similar systems such as Vertica, Apache Impala, Drill, Google Borg, Google BigQuery, Amazon EMR, Amazon Redshift, Docker, Kubernetes, Mesos etc.
  • Execution & Results: You tackle large technical projects/problems that are not clearly defined. You anticipate roadblocks and have strategies to de-risk timelines. You orchestrate work that spans multiple teams and keep your stakeholders informed.
  • A team player: You believe that you can achieve more on a team, that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
  • Business acumen: You understand requirements beyond the written word. Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹35L / yr
SQL
Python
Metrics management
Data Analytics

Responsibilities

  • Work with large and complex blockchain data sets and derive investment-relevant metrics in close partnership with financial analysts and blockchain engineers (a minimal metric sketch follows this list).
  • Apply knowledge of statistics, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries leading to the development of fundamental metrics needed to evaluate various crypto assets.
  • Build a strong understanding of existing metrics used to value various decentralized applications and protocols.
  • Build customer-facing metrics and dashboards.
  • Work closely with analysts, engineers, Product Managers and provide feedback as we develop our data analytics and research platform.
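
A hedged illustration of the SQL-plus-Python metric derivation this role describes: computing daily active addresses from a transactions table. The schema, table name and SQLite backend are assumptions chosen to keep the sketch self-contained, not details from the posting.

```python
# Sketch: derive a "daily active addresses" metric with SQL, shape it in Python.
# The transactions schema below is hypothetical.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (from_address TEXT, block_time TEXT);
INSERT INTO transactions VALUES
  ('0xabc', '2024-01-01'), ('0xdef', '2024-01-01'),
  ('0xabc', '2024-01-02'), ('0xabc', '2024-01-02');
""")

# Daily active addresses: distinct senders per day.
daa = pd.read_sql_query(
    """
    SELECT DATE(block_time) AS day,
           COUNT(DISTINCT from_address) AS active_addresses
    FROM transactions
    GROUP BY DATE(block_time)
    ORDER BY day
    """,
    conn,
)
print(daa)  # feeds a dashboard or an asset-valuation model
```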

Qualifications

  • Bachelor's degree in Mathematics, Statistics or a relevant technical field, or equivalent practical experience; alternatively, a degree in an analytical field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research, Management Science)
  • 3+ years of experience with data analysis and metrics development
  • 3+ years of experience analyzing and interpreting data, drawing conclusions, defining recommended actions, and reporting results across stakeholders
  • 2+ years of experience writing SQL queries
  • 2+ years of experience scripting in Python
  • Demonstrated curiosity in and excitement for Web3/blockchain technologies
Epik Solutions
Posted by Sakshi Sarraf
Bengaluru (Bangalore), Noida
4 - 13 yrs
₹7L - ₹18L / yr
Python
SQL
Databricks
Scala
Spark
+2 more

Job Description:


As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:


Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.
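
A minimal PySpark sketch of that ingest-transform-load shape, assuming a hypothetical orders dataset; the storage paths, container names and columns are illustrative, not values from this posting.

```python
# Sketch: ingest raw CSV, transform to a daily aggregate, load curated Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest: read raw CSV landed in the data lake (placeholder path).
raw = spark.read.option("header", True).csv(
    "abfss://landing@examplelake.dfs.core.windows.net/orders/"
)

# Transform: type the columns and aggregate revenue per day.
daily = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .groupBy(F.to_date("order_ts").alias("order_date"))
       .agg(F.sum("amount").alias("revenue"))
)

# Load: write a partitioned Parquet table to the curated zone.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/daily_revenue/"
)
```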


Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.


Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.
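
For the ADF orchestration piece, a hedged sketch of triggering and polling a pipeline run with the azure-mgmt-datafactory Python SDK; the subscription, resource group, factory and pipeline names are placeholders.

```python
# Sketch: trigger an ADF pipeline run and poll its status.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, "<subscription-id>")

# Kick off a pipeline run with runtime parameters (all names are placeholders).
run = adf.pipelines.create_run(
    resource_group_name="rg-data",
    factory_name="adf-platform",
    pipeline_name="pl_ingest_orders",
    parameters={"run_date": "2024-01-01"},
)

# Poll the run status (Queued / InProgress / Succeeded / Failed).
status = adf.pipeline_runs.get("rg-data", "adf-platform", run.run_id).status
print(status)
```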


Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.


Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
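
A few illustrative PySpark tuning moves of the kind this paragraph describes: partition pruning, a broadcast join for a small dimension table, and caching a reused DataFrame. Table locations and column names are assumptions.

```python
# Sketch: common Spark query optimizations on hypothetical curated tables.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.parquet("/data/curated/daily_revenue/")
dims = spark.read.parquet("/data/curated/stores/")

# Partition pruning: filter on the partition column as early as possible.
recent = facts.where(F.col("order_date") >= "2024-01-01")

# Broadcast join: avoid shuffling the large fact table for a small dimension.
joined = recent.join(F.broadcast(dims), "store_id")

# Cache a DataFrame that several downstream queries reuse.
joined.cache()
joined.groupBy("region").agg(F.sum("revenue")).show()
joined.groupBy("store_id").agg(F.avg("revenue")).show()
```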


Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.


Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.


Skills and Qualifications:


Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.

Proficiency in designing and developing data pipelines and ETL processes.

Solid understanding of data modeling concepts and database design principles.

Familiarity with data integration and orchestration using Azure Data Factory.

Knowledge of data quality management and data governance practices.

Experience with performance tuning and optimization of data pipelines.

Strong problem-solving and troubleshooting skills related to data engineering.

Excellent collaboration and communication skills to work effectively in cross-functional teams.

Understanding of cloud computing principles and experience with Azure services.


Ganit Business Solutions
Posted by Viswanath Subramanian
Remote, Chennai, Bengaluru (Bangalore), Mumbai
3 - 7 yrs
₹12L - ₹25L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
R Programming
+5 more

Ganit has flipped the data science value chain: we do not start with a technique; for us, consumption comes first. With this philosophy, we have successfully scaled from a small start-up to a 200-person company with clients in the US, Singapore, Africa, UAE, and India.

We are looking for experienced data enthusiasts who can make the data talk to them. 

 

You will: 

  • Understand business problems and translate business requirements into technical requirements. 
  • Conduct complex data analysis to ensure data quality & reliability i.e., make the data talk by extracting, preparing, and transforming it. 
  • Identify, develop and implement statistical techniques and algorithms to address business challenges and add value to the organization. 
  • Gather requirements and communicate findings in the form of a meaningful story to the stakeholders.
  • Build & implement data models using predictive modelling techniques. Interact with clients and provide support for queries and delivery adoption. 
  • Lead and mentor data analysts. 

 

We are looking for someone who has, apart from a love for data and the ability to code even while sleeping, the following:

  • A minimum of 2 years of experience in designing and delivering data science solutions.
  • Successful retail/BFSI/FMCG/Manufacturing/QSR projects in your kitty to show off.
  • A deep understanding of various statistical techniques, mathematical models and algorithms to start the conversation with the data in hand.
  • The ability to choose the right model for the data and translate it into code using R, Python, VBA, SQL, etc.
  • A Bachelor's/Master's degree in Engineering/Technology, an MBA from a Tier-1 B-school, or an MSc in Statistics or Mathematics.

Skillset Required:

  • Regression
  • Classification
  • Predictive Modelling
  • Prescriptive Modelling
  • Python
  • R
  • Descriptive Modelling
  • Time Series
  • Clustering

What is in it for you: 

 

  • Be a part of building the biggest brand in Data science. 
  • An opportunity to be a part of a young and energetic team with a strong pedigree. 
  • Work on awesome projects across industries and learn from the best in the industry, while growing at a hyper rate. 

 

Please Note:  

 

At Ganit, we are looking for people who love problem solving. You are encouraged to apply even if your experience does not precisely match the job description above. Your passion and skills will stand out and set you apart—especially if your career has taken some extraordinary twists and turns over the years. We welcome diverse perspectives, people who think rigorously and are not afraid to challenge assumptions in a problem. Join us and punch above your weight! 

Ganit is an equal opportunity employer and is committed to providing a work environment that is free from harassment and discrimination. 

All recruitment, selection procedures and decisions will reflect Ganit’s commitment to providing equal opportunity. All potential candidates will be assessed according to their skills, knowledge, qualifications, and capabilities. No regard will be given to factors such as age, gender, marital status, race, religion, physical impairment, or political opinions. 

Gurugram, Pune, Bengaluru (Bangalore), Delhi, Noida, Ghaziabad, Faridabad
2 - 9 yrs
₹8L - ₹20L / yr
skill iconPython
Hadoop
Big Data
Spark
Data engineering
+3 more

Key Responsibilities (Data Developer: Python, Spark)

Experience: 2 to 9 years

Development of data platforms, integration frameworks, processes, and code.

Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages

Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests and unit tests.
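
Since the role calls for unit through end-to-end test coverage, here is a minimal pytest-style sketch. The function under test (daily_revenue) and its contract are hypothetical, for illustration only.

```python
# Sketch: pytest-style unit tests for a hypothetical aggregation helper.
import pytest

def daily_revenue(rows):
    """Sum (day, amount) tuples into per-day totals; reject negative amounts."""
    totals = {}
    for day, amount in rows:
        if amount < 0:
            raise ValueError("amount must be non-negative")
        totals[day] = totals.get(day, 0.0) + amount
    return totals

def test_daily_revenue_aggregates_per_day():
    rows = [("2024-01-01", 10.0), ("2024-01-01", 5.0), ("2024-01-02", 1.0)]
    assert daily_revenue(rows) == {"2024-01-01": 15.0, "2024-01-02": 1.0}

def test_daily_revenue_rejects_negative_amounts():
    with pytest.raises(ValueError):
        daily_revenue([("2024-01-01", -1.0)])
```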

Elaborate stories in a collaborative agile environment (SCRUM or Kanban)

Familiarity with cloud platforms like GCP, AWS or Azure.

Experience with large data volumes.

Familiarity with writing rest-based services.

Experience with distributed processing and systems

Experience with Hadoop / Spark toolsets

Experience with relational database management systems (RDBMS)

Experience with Data Flow development

Knowledge of Agile and associated development techniques.

MNC
Agency job
via Fragma Data Systems by Harpreet kour
Bengaluru (Bangalore)
5 - 9 yrs
₹16L - ₹20L / yr
Apache Hadoop
Hadoop
Apache Hive
HDFS
SSL
+1 more
Responsibilities

  • Responsible for implementation and ongoing administration of Hadoop infrastructure.
  • Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments.
  • Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig and MapReduce access for the new users (a hedged onboarding sketch follows this list).
  • Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise, Dell OpenManage and others.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
  • Screening Hadoop cluster job performance and capacity planning.
  • Monitoring Hadoop cluster connectivity and security.
  • Managing and reviewing Hadoop log files.
  • File system management and monitoring.
  • Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality and availability.
  • Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades when required.
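
A hedged sketch of the new-user onboarding and smoke test described above, driving the standard Kerberos and Hadoop CLIs from Python. The realm, HiveServer2 host and username are placeholders; this assumes it runs as an administrator on the appropriate hosts.

```python
# Sketch: onboard a new Hadoop user and verify HDFS/Hive access.
# All hostnames, realms and paths below are placeholder assumptions.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

user = "alice"

# 1. Kerberos principal for the new user (run on the KDC host).
run(["kadmin.local", "-q", f"addprinc -randkey {user}@EXAMPLE.COM"])

# 2. HDFS home directory, owned by the new user.
run(["hdfs", "dfs", "-mkdir", "-p", f"/user/{user}"])
run(["hdfs", "dfs", "-chown", f"{user}:{user}", f"/user/{user}"])

# 3. Smoke tests: list the home directory and reach HiveServer2.
run(["hdfs", "dfs", "-ls", f"/user/{user}"])
run(["beeline",
     "-u", "jdbc:hive2://hive-host:10000/default;principal=hive/_HOST@EXAMPLE.COM",
     "-e", "SHOW DATABASES;"])
```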
Qualifications

  • Bachelor's degree in Information Technology, Computer Science or other relevant fields.
  • General operational expertise, such as good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage and networks.
  • Hadoop skills such as HBase, Hive, Pig and Mahout.
  • Ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure it, and take backups.
  • Good knowledge of Linux, as Hadoop runs on Linux.
  • Familiarity with open-source configuration management and deployment tools such as Puppet or Chef, and with Linux scripting.

Nice to Have

  • Knowledge of troubleshooting core Java applications is a plus.
Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹10L / yr
Big Data
Hadoop
Apache Spark
Spark
Apache Kafka
+11 more

We are looking for a savvy Data Engineer to join our growing team of analytics experts. 

 

The hire will be responsible for:

- Expanding and optimizing our data and data pipeline architecture

- Optimizing data flow and collection for cross-functional teams.

- Will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.

- Must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.

- Experience with Azure: ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates

- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.

- Experience with object-oriented/object function scripting languages: Python, SQL, Scala, Spark-SQL etc.

Nice to have experience with :

- Big data tools: Hadoop, Spark and Kafka

- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow

- Stream-processing systems: Storm

Database: SQL DB

Programming languages: PL/SQL, Spark SQL

Looking for candidates with Data Warehousing experience, strong domain knowledge & experience working as a Technical lead.

The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

PAGO Analytics India Pvt Ltd
Posted by Vijay Cheripally
Remote, Bengaluru (Bangalore), Mumbai, NCR (Delhi | Gurgaon | Noida)
2 - 8 yrs
₹8L - ₹15L / yr
Python
PySpark
Microsoft Windows Azure
SQL Azure
Data Analytics
+6 more
Be an integral part of large scale client business development and delivery engagements
Develop the software and systems needed for end-to-end execution on large projects
Work across all phases of SDLC, and use Software Engineering principles to build scaled solutions
Build the knowledge base required to deliver increasingly complex technology projects


Object-oriented languages (e.g. Python, PySpark, Java, C#, C++) and frameworks (e.g. J2EE or .NET)
Database programming using any flavours of SQL
Expertise in relational and dimensional modelling, including big data technologies
Exposure across all the SDLC process, including testing and deployment
Expertise in Microsoft Azure is mandatory, including components like Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service etc.
Good knowledge of Python and Spark is required
Good understanding of how to enable analytics using cloud technology and ML Ops
Experience in Azure Infrastructure and Azure Dev Ops will be a strong plus
Yulu Bikes
Posted by Keerthana k
Bengaluru (Bangalore)
1 - 2 yrs
₹7L - ₹12L / yr
Data Science
Data Analytics
SQL
Python
Data warehousing
+2 more
Skill Set
SQL, Python, NumPy, Pandas. Knowledge of Hive and data warehousing concepts will be a plus.

JD 

- Strong analytical skills, with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports & visualisations (see the sketch after this list).

- Work with management to prioritise business KPIs and information needs. Locate and define new process improvement opportunities.

- Technical expertise with data models, database design and development, data mining and segmentation techniques

- Proven success in a collaborative, team-oriented environment

- Working experience with geospatial data will be a plus.
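
An illustrative pandas sketch of the trend analysis this JD describes: rolling ride activity up to a weekly series and reporting the week-over-week change. The ride-log schema is a hypothetical stand-in, not Yulu's actual data model.

```python
# Sketch: weekly ride trend from a hypothetical per-ride log.
import pandas as pd

rides = pd.DataFrame({
    "started_at": pd.to_datetime(
        ["2024-01-01", "2024-01-02", "2024-01-08", "2024-01-09", "2024-01-10"]
    ),
    "distance_km": [1.2, 3.4, 2.2, 0.8, 5.0],
})

# Weekly ride counts and total distance.
weekly = rides.set_index("started_at").resample("W")["distance_km"].agg(["count", "sum"])
weekly.columns = ["rides", "total_km"]

# Week-over-week trend in ride volume.
weekly["rides_wow_change"] = weekly["rides"].pct_change()
print(weekly)
```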
SigTuple
Posted by Sneha Chakravorty
Bengaluru (Bangalore)
2 - 6 yrs
₹4L - ₹20L / yr
Data Science
R Programming
Python
Machine Learning (ML)
We are looking for highly passionate and enthusiastic players for solving problems in medical data analysis using a combination of image processing, machine learning and deep learning.

As a Senior Computer Scientist at SigTuple, you will have the onus of creating and leveraging state-of-the-art algorithms in machine learning, image processing and AI which will impact billions of people across the world by creating healthcare solutions that are accurate and affordable. You will collaborate with our current team of super awesome geeks in cracking super complex problems in a simple way by creating experiments, algorithms and prototypes that not only yield high accuracy but are also designed and engineered to scale. We believe in innovation; needless to say, you will be part of creating intellectual property such as patents and contributing to the research community by publishing papers. It is something that we value the most.

What we are looking for:

  • Hands-on experience along with a strong understanding of foundational algorithms in either machine learning, computer vision or deep learning. Prior experience applying these techniques to images and videos would be good to have.
  • Hands-on experience in building and implementing advanced statistical analysis, machine learning and data mining algorithms.
  • Programming experience in C, C++ and Python.

What should you have:

  • 2-5 years of relevant experience in solving problems using machine learning or computer vision.
  • A Bachelor's degree, Master's degree or PhD in computer science or related fields.
  • An innovative and creative mindset: somebody who is not afraid to try something new and inspires others to do so.
  • The drive to thrive in a fast-paced and fun environment, working with a bunch of data scientist geeks and disruptors striving for a big cause.

What SigTuple can offer: You will be working with an incredible team of smart and supportive people, driven by a common force to change things for the better. With an opportunity to deliver high-calibre mobile and desktop solutions integrated with hardware that will transform healthcare from the ground up, there will ultimately be different challenges for you to face. Suffice to say that if you thrive in these environments, the buzz alone will keep you energized. In short, you will snag a place at the table of one of the most vibrant start-ups in the industry!
zeotap India Pvt Ltd
Posted by Projjol Banerjea
Bengaluru (Bangalore)
6 - 10 yrs
₹5L - ₹40L / yr
Python
Big Data
Hadoop
Scala
Spark