Database Architect

Locations

Bengaluru (Bangalore)

Experience

5 - 10 years

Salary

INR 10L - 20L

Skills

ETL
Data Warehouse (DWH)
DWH Cloud
Hadoop
Apache Hive
Spark
MongoDB
PostgreSQL

Job description

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization, driving high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's data lake.

Key responsibilities:
- Create a GRAND data lake and warehouse that pools the data from all GRAND regions and stores in the GCC
- Ensure source data quality measurement, enrichment, and reporting of data quality
- Manage all ETL and data-model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills needed:
- Very strong SQL, with demonstrated RDBMS experience (e.g., Postgres, MongoDB); Unix shell scripting preferred
- Experience with UNIX and comfort working at the shell (bash or Korn preferred)
- Good understanding of data-warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Align with the systems engineering team to propose and deploy new hardware and software environments for Hadoop and to expand existing environments
- Work with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for the new users
- Cluster maintenance, including creation and removal of nodes, using tools like Ganglia, Nagios, and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and do capacity planning
- Monitor Hadoop cluster connectivity and security
- File system management and monitoring; HDFS support and maintenance
- Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Define, develop, document, and maintain Hive-based ETL mappings and scripts (a sketch follows below)
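A minimal PySpark sketch of the kind of Hive-based ETL job the role describes; the paths, table, and column names are hypothetical stand-ins, not part of the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("grand-sales-etl")
    .enableHiveSupport()      # lets Spark read/write Hive tables in the DWH
    .getOrCreate()
)

# Extract: raw point-of-sale records landed in the data lake (hypothetical path)
raw = spark.read.parquet("hdfs:///datalake/raw/pos_sales/")

# Transform: basic data-quality filtering and enrichment
clean = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("amount") > 0)
       .withColumn("sale_date", F.to_date("sold_at"))
)

# Load: append into a partitioned Hive table in the warehouse
clean.write.mode("append").partitionBy("sale_date").saveAsTable("dwh.fact_sales")
```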

About the company

The group currently operates Grand Shopping Malls, Grand Hypermarkets and Grand Xpress in the Middle East and India.

Founded

2017

Type

Products & Services

Size

6-50 employees

Stage

Raised funding

Similar jobs

Backend Developer

Founded 2012
Product
51-250 employees
Raised funding
C/C++
Java
.NET
PostgreSQL
MySQL
Go Programming (Golang)
Amazon Web Services (AWS)
Location
Pune
Experience
5 - 10 years

clypd is a leading player in one of the hottest emerging markets, programmatic TV, disrupting the $74 billion television industry. We are on a mission to change television advertising and make it more efficient and effective for TV media companies, consumers, and advertisers. We're looking for someone who has experience building backend systems, an interest in data infrastructure, and a strong understanding of computer science and database fundamentals. More details - http://clypd.theresumator.com/apply/DyJEzS/Senior-Software-Engineer-Pune-India

Job posted by Sumit Shah

Senior Product Developer

Founded 2011
Services
6-50 employees
Raised funding
Scala
Java
Relational Database (RDBMS)
Apache
Spark
HDFS
Apache Flume
Apache Kafka
Location
NCR (Delhi | Gurgaon | Noida)
Experience
4 - 4 years

Our client is a Gurgaon-based software startup in the Artificial Intelligence, Big Data, and Data Science domain. They have built a virtual data scientist: an AI-powered agent that can learn and work 24x7 to deliver the business insights that matter most.

- Working on a unique concept
- Recognized by the Indian Angel Network (IAN), the biggest angel network in India, along with DIPP (Govt. of India) and NASSCOM
- Winner of $120K in credits as part of the Microsoft BizSpark Plus program
- Has raised two professional rounds of funding
- Alumni of premier institutes (IIT Bombay, IIT Delhi) on the advisory panel
- The current hiring is for the core team expansion, which will stay under 10 people; the candidate will be part of the core founding team and get tremendous exposure. The core founding team focuses on invention and gets significant opportunities to file patents.
- Days: Monday to Saturday, with one weekday off per month of the employee's choice, on top of earned/privilege leaves and bank holidays
- Location: Gurgaon
- Line management: reports directly to the CxO team

Position: Sr. Product Developer

Job description: The Sr. Product Developer will be part of the client's lab, working closely with the Product Management, AI Research, and Data Scientist teams. Key responsibilities include:
- Design and development of the product on a Big Data architecture and framework (Apache Spark, HDFS, Flume, Kafka) with Scala/Java
- Development of machine learning algorithms in the Apache Spark framework
- Design and development of integration connectors with external data sources such as RDBMSs (MySQL, Oracle, etc.) and other products (see the sketch below)
- Lead, mentor, and coach team members

Skills:
- 4+ years of product development experience in Scala/Java
- In-depth knowledge of core design/architectural concerns such as design patterns, performance, code reusability, and quality
- Good understanding of RDBMSs and EA diagrams
- Experience developing (or upgrading) Apache Spark libraries, and contributions to Apache Spark or other open-source frameworks, are an added advantage
- Understanding of data security or statistics (e.g., probability distributions) is an added advantage
- Initiative, self-motivation, and a learning attitude are a must

Experience and qualification required: B.Tech from a Tier 1.5 institute (NIT/IIIT/DCE) with 4+ years of experience.
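A sketch of the Spark-plus-Kafka-plus-RDBMS connector pattern the description outlines, written in PySpark for brevity (the role itself calls for Scala/Java); the broker, topic, and JDBC details are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Stream events from Kafka via Spark Structured Streaming
events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
         .option("subscribe", "user-events")                 # hypothetical topic
         .load()
         .selectExpr("CAST(value AS STRING) AS payload")
)

# Pull reference data from an RDBMS over JDBC (the integration-connector pattern)
customers = (
    spark.read
         .format("jdbc")
         .option("url", "jdbc:mysql://db-host:3306/crm")     # hypothetical database
         .option("dbtable", "customers")
         .option("user", "reader")
         .option("password", "secret")
         .load()
)

# Write the streaming events to the console for inspection
query = events.writeStream.format("console").outputMode("append").start()
```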

Job posted by Suchit Aggarwal

Big Data Evangelist

Founded 2016
Products and services
6-50 employees
Profitable
Spark
Hadoop
Apache Kafka
Apache Flume
Scala
Python
MongoDB
Cassandra
Location
Noida
Experience
2 - 6 years

Looking for a technically sound, excellent trainer on big data technologies. This is an opportunity to become well known in the industry and gain visibility: host regular sessions on big data technologies and get paid to learn.

Job posted by Suchit Majumdar

Big Data Architect

Founded
employees
Spark
HDFS
Cassandra
MongoDB
Apache Storm
Apache Hive
Apache Kafka
Apache HBase
Location
Anywhere
Experience
5 - 11 years

Sigmoid is a fast-growing product-based Big Data startup, funded by Sequoia and backed by experienced professionals and advisors. Sigmoid is revolutionizing business intelligence and analytics by providing unified tools for historical and real-time analysis on Apache Spark. With its suite of products, Sigmoid is democratizing streaming use cases like RTB data analytics, log analytics, fraud detection, and sensor data analytics. Sigmoid can enable a customer's engineering team to set up their infrastructure on Spark and speed up their development timelines, or enable the analytics team to derive insights from their data. Sigmoid has created a real-time exploratory analytics tool on Apache Spark which not only vastly improves performance but also reduces cost: a user can quickly analyse huge volumes of data, filter through multiple dimensions, compare results across time periods, and carry out root-cause analysis in a matter of seconds (see the sketch below). Leading organisations across industry verticals are currently using Sigmoid's platform in production.

What Sigmoid offers you:
- Work in a well-funded (Sequoia Capital) Big Data company
- Deal with terabytes of data on a regular basis
- Opportunity to contribute to top big data projects
- Work on complex problems faced by leading global companies in areas such as fraud detection, real-time analytics, and pricing modeling

We are looking for someone who has:
- 6+ years of demonstrable experience designing technological solutions to complex data problems and developing efficient, scalable code
- Experience in the design, architecture, and development of Big Data technologies
- Technical leadership in the Big Data space (Apache Spark, Kafka, Flink, Hadoop, MapReduce, HDFS, Hive, HBase, Flume, Sqoop, NoSQL, Cassandra)
- A strong understanding of databases and SQL
- Defined and driven best practices in the Big Data stack
- Driven operational excellence through root-cause analysis and continuous improvement of Big Data technologies and processes
- Operating knowledge of cloud computing platforms (AWS, Azure, and/or Google Cloud)
- Mentored and coached engineers to facilitate their development and provided technical leadership to them
- A love of coding and design, great problem-solving skills, and the ability and confidence to hack their way out of tight corners

Preferred qualifications: an engineering Bachelors/Masters in Computer Science/IT; top-tier colleges (IIT, NIT, IIIT, etc.) preferred. Salary is not a constraint for the right talent.
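A PySpark sketch of the kind of interactive analysis described above: slicing one dimension and comparing an aggregate across time windows. The source path, columns, and values are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("exploratory-analytics").getOrCreate()

# Hypothetical ad-serving logs with timestamp, country, campaign, bid_price
logs = spark.read.json("s3a://bucket/ad-logs/")

(
    logs.filter(F.col("country") == "IN")              # filter one dimension
        .groupBy(F.window("timestamp", "1 hour"),      # bucket by time period
                 "campaign")
        .agg(F.count("*").alias("impressions"),
             F.avg("bid_price").alias("avg_bid"))
        .orderBy("window")
        .show(truncate=False)
)
```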

Job posted by Karthik Selvaraj

Full Stack Developer Python

Founded 2012
Products and services
51-250 employees
Raised funding
Python
PostgreSQL
Linux/Unix
HTML/CSS
Javascript
Amazon Web Services (AWS)
Google Cloud Storage
Relational Database (RDBMS)
Location
Bengaluru (Bangalore)
Experience
4 - 7 years

Full Stack Developer (Python), experience 4-7 years. We're looking for an experienced developer who can help us build and continue to advance a software platform for collecting, sharing, and analyzing data in the most difficult settings on earth. The platform is designed to be accessible to users with little to no technical knowledge, and we need someone who can work with us to create and code a user experience that's simple, reliable, and fast. The project has both a web portal and a cross-platform, online/offline mobile app. You'll be working across our technology, from our web and mobile user interfaces through to our data storage.

Required skills:
- Around 4 to 5 years of development experience, with at least 2 years in Python and Postgres
- Strong experience with Linux-based operating systems
- Good experience writing HTML and JavaScript
- Strong working experience with relational databases
- Knowledge of or experience with SQLAlchemy and/or Django is a plus (see the sketch below)
- Strong communication skills, a passion to learn, and an ability to work well with people at all levels of an organization
- Working experience with AWS and/or Google Cloud is an added plus

Interested candidates can send their updated resume to Thouseef.Ahmed@suventure.in
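A minimal sketch of the Python + Postgres part of this stack using SQLAlchemy, which the posting names as a plus; the model, columns, and connection string are hypothetical.

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Survey(Base):
    """A hypothetical record collected by the field-data platform."""
    __tablename__ = "surveys"
    id = Column(Integer, primary_key=True)
    region = Column(String, nullable=False)
    status = Column(String, default="draft")

# Hypothetical Postgres connection string
engine = create_engine("postgresql://user:secret@localhost/fielddata")
Base.metadata.create_all(engine)          # create the table if it doesn't exist

with Session(engine) as session:
    session.add(Survey(region="east"))
    session.commit()
    print(session.query(Survey).count())  # verify the insert
```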

Job posted by Thouseef Ahmed

Data Scientist

Founded 2013
Product
6-50 employees
Raised funding
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location
Mumbai
Experience
3 - 7 years

Data Scientist: we are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey.

Responsibilities:
- Data mining using methods like associations, correlations, inference, clustering, and graph analysis
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
- Design and implement machine learning, information extraction, and probabilistic matching algorithms and models
- Care about designing the full machine learning pipeline
- Extend the company's data with third-party sources
- Enhance data collection procedures
- Process, clean, and verify collected data
- Perform ad hoc analysis of the data and present clear results
- Create advanced analytics products that provide actionable insights

The individual: we are looking for a candidate with the following skills, experience, and attributes.

Required:
- 2+ years of work experience in machine learning
- Educational qualification relevant to the role: a degree in statistics, certificate courses in Big Data, machine learning, etc.
- Knowledge of machine learning techniques and algorithms (see the clustering sketch below)
- Knowledge of languages and toolkits like Python, R, and NumPy
- Knowledge of data visualization tools like D3.js and ggplot2
- Knowledge of query languages like SQL, Hive, and Pig
- Familiarity with Big Data architecture and tools like Hadoop, Spark, and MapReduce
- Familiarity with NoSQL databases like MongoDB, Cassandra, and HBase
- Good applied statistics skills: distributions, statistical testing, regression, etc.

Compensation and logistics: this is a full-time opportunity. Compensation will be in line with startup norms and based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.
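A toy sketch of one technique the requirements list names (clustering), using scikit-learn; the feature matrix is synthetic stand-in data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))    # synthetic stand-in for user/transaction features

# Group similar users into 4 clusters
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(model.labels_[:10])        # cluster assignment for the first 10 users
```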

Job posted by Julie K

Database Architect

Founded 2017
Products and services
6-50 employees
Raised funding
ETL
Data Warehouse (DWH)
DWH Cloud
Hadoop
Apache Hive
Spark
MongoDB
PostgreSQL
Location
Bengaluru (Bangalore)
Experience
5 - 10 years

Same role and description as the Database Architect position above.

Job posted by Rahul Malani

Python Developer

Founded 2016
Product
6-50 employees
Bootstrapped
Python
Odoo (OpenERP)
PostgreSQL
github
RESTful APIs
Location
Chennai
Experience
3 - 7 years

Responsibilities:
- Analyze customer needs; design and build solutions with Odoo (see the sketch below)
- Consistently create quality software that meets specific designs and requirements on stated timelines by writing, reviewing, and documenting application code
- Enhance applications; develop and configure features and functionality that support business requirements
- Write and support extended APIs and interfaces with other key business systems
- Influence the technical direction of the project as well as the hosting architecture
- Be involved in the Odoo project and community
- Support Odoo reports
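A minimal sketch of an Odoo model, the kind of feature customization these responsibilities describe; the model and field names are hypothetical.

```python
from odoo import models, fields

class LibraryBook(models.Model):
    """A hypothetical model registered with Odoo's ORM."""
    _name = "library.book"
    _description = "Library Book"

    name = fields.Char(required=True)
    pages = fields.Integer()
    available = fields.Boolean(default=True)
```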

Job posted by Think42Labs HR

Data Scientist

Founded 2015
Services
6-50 employees
Profitable
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location
Hyderabad
Experience
6 - 10 years

It is one of the largest communication technology companies in the world. They operate America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.

Job posted by Sangita Deka

Data Scientist

Founded 2011
Product
250+ employees
Raised funding
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location
NCR (Delhi | Gurgaon | Noida)
Experience
1 - 4 years

Position: Data Scientist
Location: Gurgaon

Job description: Shopclues is looking for a talented Data Scientist passionate about building large-scale data processing systems to help manage the ever-growing information needs of our clients.

Education: PhD/MS or equivalent in applied mathematics, statistics, physics, computer science, or operations research, with 2+ years of experience in a relevant role.

Skills:
- Passion for understanding business problems and addressing them by leveraging data characterized by high volume and high dimensionality from multiple sources
- Ability to communicate complex models and analysis clearly and precisely
- Experience building predictive statistical, behavioural, or other models via supervised and unsupervised machine learning, statistical analysis, and other predictive modeling techniques (see the sketch below)
- Experience using R, SAS, Matlab, or equivalent statistical/data analysis tools, with the ability to transfer that knowledge to different tools
- Experience with matrices, distributions, and probability
- Familiarity with at least one scripting language: Python/Ruby
- Proficiency with relational databases and SQL

Responsibilities:
- Work in a big data environment alongside a big data engineering team (and data visualization team, and data and business analysts)
- Translate the client's business requirements into a set of analytical models
- Perform data analysis (with a representative sample data slice) and build/prototype the model(s)
- Provide inputs to the data ingestion/engineering team on the input data required by the model: size, format, associations, and cleansing required
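A toy sketch of the supervised predictive modeling the skills list mentions, using scikit-learn; the data here is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for client data
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit a simple supervised model and check holdout accuracy
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")
```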

Job posted by Shrey Srivastava