
11+ Index Tuning Wizard Jobs in India

Apply to 11+ Index Tuning Wizard Jobs on CutShort.io. Find your next job, effortlessly. Browse Index Tuning Wizard Jobs and apply today!

Editorialistyx

at Editorialistyx

1 recruiter
Amrita Patwal
Posted by Amrita Patwal
Remote only
6 - 10 yrs
₹10L - ₹15L / yr
Elasticsearch
Solr
Federated search
Auto suggestion
recommendation search
+2 more
PRODUCT
  • Editorialist YX is looking for a Technical Architect - Search. In this role, you will work with a team that builds a unified search platform to power the various searches for our .com website, iOS app, and internal support tools. This search impacts thousands of customers a day and will become pivotal to our tech efforts as we continue to grow 30x YoY.

  • You will own the technical direction for the team, and you will be leading key search projects from ideation all the way to deployment. You will be working closely with both technical and business leaders to fulfill your mission.
  • Salary is no bar for the right candidate.


QUALIFICATION

  • 6+ years of experience working in Java and web services.
  • 6+ years of experience working in the Search domain.
  • Proven skills in designing scalable, highly available distributed systems which can handle high data volumes.
  • Strong understanding of software engineering principles and fundamentals including data structures and algorithms.
  • Solid understanding of concurrency and multi-threading, multiple design patterns, and debugging and analytical methodologies.
  • Hands-on experience with SolrCloud or Elasticsearch.
  • Deep understanding of information retrieval concepts.
  • Deep understanding of linguistic processing such as tokenizers, spell checkers, and stemmers (see the analyzer sketch after this list).
  • Hands-on experience with big data tech stacks such as Hadoop, Hive, Cassandra, and Spark is a plus.
  • Self-directed, self-motivated, and detail-oriented with the ability to come up with good design proposals and thorough analysis of production issues.
  • Excellent written and oral communication skills on both technical and non-technical topics.
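For illustration only (not part of the original posting): a minimal sketch of the linguistic-processing pieces named above, using the official Python Elasticsearch client to define a custom analyzer with a tokenizer, lowercase filter, and stemmer. The index name, field name, analyzer name, and sample text are hypothetical, and exact call signatures may vary by client version.

# Minimal sketch; assumes the official `elasticsearch` Python client and a
# local cluster. Index, field, and analyzer names are hypothetical examples.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

settings = {
    "settings": {
        "analysis": {
            "analyzer": {
                "product_text": {             # hypothetical custom analyzer
                    "type": "custom",
                    "tokenizer": "standard",  # split text into word tokens
                    "filter": ["lowercase", "porter_stem"],  # normalize + stem
                }
            }
        }
    },
    "mappings": {
        "properties": {
            "title": {"type": "text", "analyzer": "product_text"}
        }
    },
}

es.indices.create(index="products", body=settings)

# Inspect how a query string is tokenized and stemmed by the analyzer.
tokens = es.indices.analyze(
    index="products",
    body={"analyzer": "product_text", "text": "Running Shoes for Women"},
)
print([t["token"] for t in tokens["tokens"]])  # e.g. ['run', 'shoe', 'for', 'women']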

RESPONSIBILITIES

  • Design and build a search engine using Elasticsearch, working with engineers on the team to ensure the overall success of Search and other ML-based systems.
  • Collaborate with peers from other Engineering groups to tackle complex and meaningful problems with efficient and scalable delivery of Search solutions.
  • You are expected to be a self-motivated, dedicated, and solution-oriented individual. The main responsibilities for this position include leading the effort to build large-scale, distributed, and highly available systems and pipelines.
  • Lead the effort to build large-scale, highly available information retrieval systems.
  • Design and develop solutions using the Java tech stack.
  • Design and implement solutions in line with secure coding guidelines.
  • Work with QA to identify issues and fix them.


EDUCATION & EXPERIENCE

  • B.Tech. in Computer Science or equivalent experience
  • 6+ years of experience in Java, web services, and the search domain
  • Experience working in a product-based company

BENEFITS

  • Retiral Benefits
  • Medical Insurance
  • Remote Working Opportunity for the time being
  • MacBook
  • Stock Options
  • Gym Membership
AI-powered cloud-based SaaS solution
Bengaluru (Bangalore)
2 - 10 yrs
₹15L - ₹50L / yr
Data engineering
Big Data
Data Engineer
Big Data Engineer
Hibernate (Java)
+18 more
Responsibilities

● Able to contribute to gathering functional requirements, developing technical specifications, and project & test planning
● Demonstrating technical expertise, and solving challenging programming and design problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units, to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years' experience designing and developing applications in data engineering
● Hands-on experience with big data ecosystems: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, Zookeeper
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python
● Strong leadership experience: leading meetings, presenting if required
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience with the cloud or AWS is preferable
● Good understanding of, and ability to develop, software, prototypes, or proofs of concept (POCs) for various data engineering requirements
6sense

at 6sense

15 recruiters
Romesh Rawat
Posted by Romesh Rawat
Remote only
5 - 8 yrs
₹30L - ₹45L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more

About Slintel (a 6sense company) :

Slintel, a 6sense company,  the leader in capturing technographics-powered buying intent, helps companies uncover the 3% of active buyers in their target market. Slintel evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market & sales intelligence.

Slintel's customers have access to the buying patterns and contact information of more than 17 million companies and 250 million decision makers across the world.

Slintel is a fast-growing B2B SaaS company in the sales and marketing tech space. We are funded by top-tier VCs and are going after a billion-dollar opportunity. At Slintel, we are building a sales development automation platform that can significantly improve outcomes for sales teams while reducing the number of hours spent on research and outreach.

We are a big data company and perform deep analysis on technology buying patterns and buyer pain points to understand where buyers are in their journey. Over 100 billion data points are analyzed every week to derive recommendations on where companies should focus their marketing and sales efforts. Third-party intent signals are then clubbed with first-party data from CRMs to derive meaningful recommendations on whom to target on any given day.

6sense is headquartered in San Francisco, CA and has 8 office locations across 4 countries.

6sense, an account engagement platform, secured $200 million in a Series E funding round, bringing its total valuation to $5.2 billion 10 months after its $125 million Series D round. The investment was co-led by Blue Owl and MSD Partners, among other new and existing investors.

LinkedIn (Slintel): https://www.linkedin.com/company/slintel/

Industry : Software Development

Company size : 51-200 employees (189 on LinkedIn)

Headquarters : Mountain View, California

Founded : 2016

Specialties : Technographics, lead intelligence, Sales Intelligence, Company Data, and Lead Data.

Website (Slintel): https://www.slintel.com/slintel

LinkedIn (6sense): https://www.linkedin.com/company/6sense/

Industry : Software Development

Company size : 501-1,000 employees (937 on LinkedIn)

Headquarters : San Francisco, California

Founded : 2013

Specialties : Predictive intelligence, Predictive marketing, B2B marketing, and Predictive sales

Website (6sense): https://6sense.com/

Acquisition News : 

https://inc42.com/buzz/us-based-based-6sense-acquires-b2b-buyer-intelligence-startup-slintel/ 

Funding Details & News :

Slintel funding: https://www.crunchbase.com/organization/slintel

6sense funding: https://www.crunchbase.com/organization/6sense

https://www.nasdaq.com/articles/ai-software-firm-6sense-valued-at-%245.2-bln-after-softbank-joins-funding-round

https://www.bloomberg.com/news/articles/2022-01-20/6sense-reaches-5-2-billion-value-with-softbank-joining-round

https://xipometer.com/en/company/6sense

Slintel & 6sense Customers :

https://www.featuredcustomers.com/vendor/slintel/customers

https://www.featuredcustomers.com/vendor/6sense/customers

About the job

Responsibilities

  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems

Requirements

  • 3+ years of experience in a Data Engineer role
  • Proficiency in Linux
  • Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena
  • Must have experience with Python/Scala
  • Must have experience with Big Data technologies like Apache Spark
  • Must have experience with Apache Airflow (a minimal DAG sketch follows this list)
  • Experience with data pipeline and ETL tools like AWS Glue
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift, and other data solutions, e.g. Databricks and Snowflake
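As a minimal illustration of the Airflow item above (not part of the original posting), the sketch below defines a tiny extract-then-load DAG with the Airflow 2.x Python API; the DAG id, task names, and the extract/load callables are hypothetical placeholders.

# Minimal Airflow 2.x DAG sketch; dag_id, task names, and the extract/load
# callables are hypothetical placeholders, not from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull rows from a source system (e.g. a database, an API, S3).
    return [{"id": 1, "value": 42}]


def load(**context):
    # Placeholder: read the upstream result via XCom and write it to a sink.
    rows = context["ti"].xcom_pull(task_ids="extract")
    print(f"would load {len(rows)} rows")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",   # run once a day
    catchup=False,                # do not backfill past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task     # load runs only after extract succeeds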

 

Desired Skills and Experience

Python, SQL, Scala, Spark, ETL

 

Agilisium
Agency job
via Recruiting India by Moumita Santra
Chennai
10 - 19 yrs
₹12L - ₹40L / yr
Big Data
Apache Spark
Spark
PySpark
ETL
+1 more

Job Sector: IT, Software

Job Type: Permanent

Location: Chennai

Experience: 10 - 20 Years

Salary: 12 – 40 LPA

Education: Any Graduate

Notice Period: Immediate

Key Skills: Python, Spark, AWS, SQL, PySpark

Contact at triple eight two zero nine four two double seven

 

Job Description:

Requirements

  • Minimum 12 years' experience
  • In-depth understanding and knowledge of distributed computing with Spark
  • Deep understanding of Spark architecture and internals
  • Proven experience in data ingestion, data integration, and data analytics with Spark, preferably PySpark
  • Expertise in ETL processes, data warehousing, and data lakes
  • Hands-on with Python for big data and analytics
  • Hands-on experience with the Agile Scrum model is an added advantage
  • Knowledge of CI/CD and orchestration tools is desirable
  • Knowledge of AWS S3, Redshift, and Lambda is preferred
Thanks
Fragma Data Systems

at Fragma Data Systems

8 recruiters
Agency job
via Fragma Data Systems by Minakshi Kumari
Remote only
7 - 13 yrs
₹15L - ₹35L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
Experience Range: 2 - 10 Years

Function: Information Technology
Desired Skills
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (see the sketch after this list)
• Good experience in SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business-rules processing and data extraction from the Data Lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
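A minimal sketch, added for illustration, of the PySpark DataFrame core functions and Spark SQL referenced above; the sample rows and column names are made up and are not part of the original posting.

# Minimal PySpark sketch: DataFrame core functions plus Spark SQL on the same
# data. Sample rows and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-sketch").getOrCreate()

df = spark.createDataFrame(
    [("electronics", 1200.0), ("electronics", 350.0), ("books", 80.0)],
    ["category", "amount"],
)

# DataFrame API: total and average amount per category.
summary = df.groupBy("category").agg(
    F.sum("amount").alias("total_amount"),
    F.avg("amount").alias("avg_amount"),
)
summary.show()

# Spark SQL: the same aggregation expressed as a query over a temp view.
df.createOrReplaceTempView("sales")
spark.sql(
    "SELECT category, SUM(amount) AS total_amount FROM sales GROUP BY category"
).show()

spark.stop()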
Education
Education Type: Engineering
Degree / Diploma: Bachelor of Engineering, Bachelor of Computer Applications, Any Engineering
Specialization / Subject: Any Specialisation
Job Type: Full Time
Job ID: 000018
Department: Software Development
Chennai
3 - 5 yrs
₹5L - ₹10L / yr
Big Data
Hadoop
Apache Kafka
Apache Hive
Microsoft Windows Azure
+1 more

Client: An IT Services Major, hiring for a leading insurance player.

 

 

Position: SENIOR CONSULTANT

 

Job Description:

 

  • Azure admin - Senior Consultant with HDInsight (Big Data)

 

Skills and Experience

 

  • Microsoft Azure Administrator certification
  • Big data project experience with the Azure HDInsight stack and big data processing frameworks such as Spark, Hadoop, Hive, Kafka, or HBase.
  • Preferred: Insurance or BFSI domain experience
  • 3 to 5 years of experience is required.
Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Garima Walia
Posted by Garima Walia
Remote only
4 - 7 yrs
₹10L - ₹14L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more

Job ID: RP100

Work Location: Remote

Required Experience: 4 to 7 years

Job Description

  • Must have Google Cloud BigQuery experience
  • Strong experience with data analysis, data modeling, and governance, with excellent analytical and problem-solving abilities
  • Good knowledge of data warehouses and data-flow ETL pipelines
  • Design, configuration, and administration of database software on a cloud platform
  • Monitoring, troubleshooting, and performance tuning of DB objects
  • Experience with table partitioning, clustered tables, materialized views, external tables, etc. (see the BigQuery sketch at the end of this job description)

Any one of the RDBMS technologies:

  • Good experience in DB design with knowledge of ER diagrams, PK/FK, stored procedures, functions, triggers, and indexes
  • Understanding the requirements of the app team and creating the necessary DB objects following best practices
  • Managing logins and database users, as well as database roles, application roles, and other security principals within the database
  • Deep knowledge of indexes, performance tuning, and complex SQL query patterns
  • Monitoring, tuning, and troubleshooting database-related issues
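For illustration of the table-partitioning and clustering items mentioned earlier in this posting (not part of the original text): a minimal sketch using the google-cloud-bigquery Python client to create a date-partitioned, clustered table and query it. The project, dataset, table, and column names are hypothetical.

# Minimal sketch with the google-cloud-bigquery client; project, dataset,
# table, and column names are hypothetical, and the DDL is only an example.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project id

ddl = """
CREATE TABLE IF NOT EXISTS `my-project.analytics.events`
(
  event_id STRING,
  user_id  STRING,
  event_ts TIMESTAMP
)
PARTITION BY DATE(event_ts)   -- daily partitions on the event timestamp
CLUSTER BY user_id            -- cluster rows within each partition
"""
client.query(ddl).result()    # run the DDL and wait for completion

# Queries filtering on the partitioning column only scan matching partitions.
query = """
SELECT user_id, COUNT(*) AS events
FROM `my-project.analytics.events`
WHERE DATE(event_ts) = DATE '2024-01-01'
GROUP BY user_id
"""
for row in client.query(query).result():
    print(row.user_id, row.events)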

About Us:

Mobile Programming LLC is a US-based digital transformation company. We help enterprises transform ideas into innovative and intelligent solutions covering the Internet of Things, Digital Commerce, Business Intelligence Analytics, and Cloud Programming. Bring your challenges to us and we will give you the smartest solutions. From conceptualizing and engineering to advanced manufacturing, we help customers build and scale products fit for the global marketplace.

Mobile Programming LLC has offices in Los Angeles, San Jose, Glendale, San Diego, Phoenix, Plano, New York, Fort Lauderdale, and Boston. Mobile Programming is an SAP Preferred Vendor, Apple Adjunct Partner, Google Empaneled Mobile Vendor, and Microsoft Gold Certified Partner.

Bungee Tech India
Abigail David
Posted by Abigail David
Remote, NCR (Delhi | Gurgaon | Noida), Chennai
5 - 10 yrs
₹10L - ₹30L / yr
Big Data
Hadoop
Apache Hive
Spark
ETL
+3 more

Company Description

At Bungee Tech, we help retailers and brands meet customers everywhere and on every occasion they are in. We believe that accurate, high-quality data matched with compelling market insights empowers retailers and brands to keep their customers at the center of all innovation and value they are delivering.

 

We provide retailers and brands with a clear and complete omnichannel picture of their competitive landscape. We collect billions of data points every day, multiple times a day, from publicly available sources. Using high-quality extraction, we uncover detailed information on products or services, which we automatically match and then proactively track for price, promotion, and availability. Plus, anything we do not match helps to identify a new assortment opportunity.

 

Empowered with this unrivalled intelligence, we unlock compelling analytics and insights that, once blended with verified partner data from trusted sources such as Nielsen, paint a complete, consolidated picture of the competitive landscape.

We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing, and monitoring them.

You will also be responsible for integrating them with the architecture used in the company.

 

We're working on the future. If you are seeking an environment where you can drive innovation, if you want to apply state-of-the-art software technologies to solve real-world problems, and if you want the satisfaction of providing visible benefit to end users in an iterative, fast-paced environment, this is your opportunity.

 

Responsibilities

As an experienced member of the team, in this role, you will:

 

  • Contribute to evolving the technical direction of analytical systems and play a critical role in their design and development

 

  • You will research, design and code, troubleshoot and support. What you create is also what you own.

 

  • Develop the next generation of automation tools for monitoring and measuring data quality, with associated user interfaces.

 

  • Be able to broaden your technical skills and work in an environment that thrives on creativity, efficient execution, and product innovation.

 

BASIC QUALIFICATIONS

  • Bachelor’s degree or higher in an analytical area such as Computer Science, Physics, Mathematics, Statistics, Engineering or similar.
  • 5+ years of relevant professional experience in Data Engineering and Business Intelligence
  • 5+ years with advanced SQL (analytical functions), ETL, and data warehousing
  • Strong knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments, data structures, data modeling, and performance tuning
  • Ability to effectively communicate with both business and technical teams.
  • Excellent coding skills in Java, Python, C++, or equivalent object-oriented programming language
  • Understanding of relational and non-relational databases and basic SQL
  • Proficiency with at least one of these scripting languages: Perl / Python / Ruby / shell script

 

PREFERRED QUALIFICATIONS

 

  • Experience with building data pipelines from application databases.
  • Experience with AWS services - S3, Redshift, Spectrum, EMR, Glue, Athena, ELK etc.
  • Experience working with Data Lakes.
  • Experience providing technical leadership and mentoring other engineers on best practices in the data engineering space
  • Sharp problem-solving skills and the ability to resolve ambiguous requirements
  • Experience working with Big Data
  • Knowledge of and experience working with Hive and the Hadoop ecosystem
  • Knowledge of Spark
  • Experience working with Data Science teams
INSOFE

at INSOFE

1 recruiter
Nitika Bist
Posted by Nitika Bist
Hyderabad, Bengaluru (Bangalore)
7 - 10 yrs
₹12L - ₹18L / yr
Big Data
Data engineering
Apache Hive
Apache Spark
Hadoop
+4 more
Roles & Responsibilities:
  • Total experience of 7-10 years; should be interested in teaching and research
  • 3+ years' experience in data engineering, which includes data ingestion, preparation, provisioning, automated testing, and quality checks
  • 3+ years of hands-on experience with Big Data cloud platforms like AWS and GCP, data lakes, and data warehouses
  • 3+ years with Big Data and analytics technologies; experience in SQL and writing code on the Spark engine using Python, Scala, or Java; experience in Spark and Scala
  • Experience in designing, building, and maintaining ETL systems
  • Experience with data pipeline and workflow management tools like Airflow
  • Application development background along with knowledge of analytics libraries, open-source natural language processing, and statistical and big data computing libraries
  • Familiarity with visualization and reporting tools like Tableau and Kibana
  • Should be good at storytelling in technology
Please note that candidates should be interested in teaching and research work.

Qualification: B.Tech / BE / M.Sc / MBA / B.Sc; certifications in Big Data technologies and cloud platforms like AWS, Azure, and GCP will be preferred
Primary Skills: Big Data + Python + Spark + Hive + Cloud Computing
Secondary Skills: NoSQL+ SQL + ETL + Scala + Tableau
Selection Process: 1 Hackathon, 1 Technical round and 1 HR round
Benefit: Free-of-cost training on Data Science from top-notch professors
NCR (Delhi | Gurgaon | Noida)
3 - 7 yrs
₹12L - ₹34L / yr
Machine Learning (ML)
Data Structures
Data engineering
Big Data
Neural networks
  • Experience with Big Data, neural networks (deep learning), and reinforcement learning
  • Ability to design machine learning systems
  • Research and implement appropriate ML algorithms and tools
  • Develop machine learning applications according to requirements
  • Select appropriate datasets and data representation methods
  • Run machine learning tests and experiments
  • Perform statistical analysis and fine-tuning using test results
  • Extend existing ML libraries and frameworks
  • Keep abreast of developments in the field
  • Understanding of data structures, data modeling, and software architecture
  • Deep knowledge of math, probability, statistics, and algorithms
  • Ability to write robust code in Python, Java, and R
  • Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn); a minimal sketch follows this list
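A minimal sketch, added for illustration only, of the train-and-evaluate workflow implied by the list above, using scikit-learn on a synthetic dataset; the dataset and model choice are arbitrary assumptions, not requirements from the posting.

# Minimal scikit-learn sketch: train and evaluate a classifier on synthetic
# data. Dataset and model choice are arbitrary examples, not from the posting.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification dataset standing in for real user data.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

# Hold out 20% of the data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Simple statistical check of generalization on the held-out split.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))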
Tradeindia.com - Infocom Network
NCR (Delhi | Gurgaon | Noida)
2 - 5 yrs
₹4L - ₹10L / yr
Machine Learning (ML)
Data engineering
Data Science
Big Data
Natural Language Processing (NLP)
+1 more
Tradeindia is looking for Machine Learning Engineers to join its engineering team. The ideal candidate will have industry experience working on a recommendation engine and a range of classification and optimization problems, e.g.:

  • Collaborative filtering/recommendation
  • User journey prediction
  • Search ranking
  • Text/sentiment classification or spam detection
  • Predictive analytics

The position will involve taking algorithm and programming skills and applying them to some of the most exciting and massive SME user data.

Qualification: M.Tech, B.Tech, M.C.A.

Location: New Delhi

About the company: Launched in the year 1996 to offer the Indian business community a platform to promote themselves globally, tradeindia.com has created a niche as India's largest B2B marketplace, offering comprehensive business solutions to the domestic and global business community through its wide array of online services, directory services, and facilitation of trade promotional events. Our portal is an ideal forum for buyers and sellers across the globe to interact and conduct business smoothly and effectively. With unmatched expertise in data acquisition and online promotion, Tradeindia subsumes a huge number of company profiles and product catalogs under 2,256 different product categories and sub-categories. It is well promoted on all major search engines and receives an average of 20.5 million hits per month. Tradeindia is maintained and promoted by INFOCOM NETWORK LTD. Today we have reached a database of 47,66,674 registered users (Jan 2019), and the company is growing on a titanic scale, with a considerable number of new users joining/registering every day, under the innovative vision and guidance of Mr. Bikky Khosla, CEO.