The job you are looking for has expired or has been deleted. Check out similar jobs below.

Similar jobs

Data Engineer

Founded 2015
Products and services
Location: Mumbai
Experience: 1 - 5 years
Salary: 7 - 12 lacs/annum

JOB DESCRIPTION:
We are looking for a Data Engineer with a solid background in scalable systems to work with our engineering team to improve and optimize our platform. You will have significant input into the team's architectural approach and execution. We are looking for a hands-on programmer who enjoys designing and optimizing data pipelines for large-scale data. This is NOT a "data scientist" role, so please don't apply if that is what you are looking for.

RESPONSIBILITIES:
1. Build, maintain, and test performant, scalable data pipelines
2. Work with data scientists and application developers to implement scalable pipelines for data ingestion, processing, machine learning, and visualization
3. Build interfaces for ingestion across various data stores

MUST-HAVE:
1. A track record of building and deploying data pipelines as part of work or side projects
2. Ability to work with an RDBMS such as MySQL or Postgres
3. Ability to deploy over cloud infrastructure, at least AWS
4. Demonstrated ability and hunger to learn

GOOD-TO-HAVE:
1. Computer Science degree
2. Expertise in at least one of: Python, Java, Scala
3. Expertise and experience in deploying solutions based on Spark and Kafka
4. Knowledge of container systems like Docker or Kubernetes
5. Experience with NoSQL / graph databases
6. Knowledge of Machine Learning

Kindly apply only if you are skilled in building data pipelines.
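As a rough illustration of the pipeline-building work this role describes (not this employer's actual stack), here is a minimal sketch of a small batch ingest step in Python: read a raw CSV export, clean it with pandas, and append it to a Postgres staging table. The file path, column names, table name, and connection string are all hypothetical.

```python
# Minimal batch-ingest sketch: CSV -> pandas clean-up -> Postgres staging table.
# All names (events.csv, event_time, user_id, events_staging, the DB URL) are
# illustrative placeholders, not details from the job listing.
import pandas as pd
from sqlalchemy import create_engine

def run_pipeline(csv_path: str, db_url: str) -> int:
    # Extract: read the raw export, parsing the timestamp column.
    df = pd.read_csv(csv_path, parse_dates=["event_time"])

    # Transform: drop malformed rows and normalise the event name.
    df = df.dropna(subset=["user_id", "event_time"])
    df["event_name"] = df["event_name"].str.strip().str.lower()

    # Load: append the cleaned rows into a staging table in Postgres.
    engine = create_engine(db_url)  # e.g. "postgresql://user:pass@host/analytics"
    df.to_sql("events_staging", engine, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    rows = run_pipeline("events.csv", "postgresql://user:pass@localhost/analytics")
    print(f"loaded {rows} rows")
```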

Job posted by Gurudatt Bhobe

Hadoop Developer

Founded 2012
Products and services
Location: Bengaluru (Bangalore)
Experience: 4 - 7 years
Salary: 23 - 30 lacs/annum

Position Description
Demonstrates up-to-date expertise in software engineering and applies it to the development, execution, and improvement of action plans. Models compliance with company policies and procedures and supports the company's mission, values, and standards of ethics and integrity. Provides and supports the implementation of business solutions. Provides support to the business. Troubleshoots business and production issues and provides on-call support.

Minimum Qualifications
- BS/MS in Computer Science or a related field
- 5+ years' experience building web applications
- Solid understanding of computer science principles
- Excellent soft skills
- Understanding of major algorithms such as searching and sorting
- Strong skills in writing clean code using Java and J2EE technologies
- Understanding of how to engineer RESTful services and microservices, and knowledge of major software patterns such as MVC, Singleton, Facade, and Business Delegate
- Deep knowledge of web technologies such as HTML5, CSS, and JSON
- Good understanding of continuous integration tools and frameworks like Jenkins
- Experience working in Agile environments, such as Scrum and Kanban
- Experience with performance tuning for very large-scale apps
- Experience writing scripts using Perl, Python, and shell scripting
- Experience writing jobs using open-source cluster computing frameworks like Spark
- Relational database design experience: MySQL, Oracle; SOLR; NoSQL: Cassandra, MongoDB, and Hive
- Aptitude for writing clean, succinct, and efficient code
- Attitude to thrive in a fun, fast-paced, start-up-like environment

Job posted by Sampreetha Pai

Bigdata Lead

Founded 1997
Products and services
Location: Pune
Experience: 2 - 5 years
Salary: 1 - 18 lacs/annum

Description
- Deep experience with and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet, and MapReduce
- Strong understanding of development languages, including Java, Python, Scala, and shell scripting
- Expertise in Apache Spark 2.x framework principles and usage
- Proficient in developing Spark batch and streaming jobs in Python, Scala, or Java
- Proven experience in performance tuning of Spark applications, from both the application-code and configuration perspectives
- Proficient in Kafka and its integration with Spark
- Proficient in Spark SQL and data warehousing techniques using Hive
- Very proficient in Unix shell scripting and operating on Linux
- Knowledge of cloud-based infrastructure
- Good experience in tuning Spark applications and performance improvements
- Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities
- Experience with software development best practices: version control systems, automated builds, etc.
- Experienced in, and able to lead, the phases of the Software Development Life Cycle on any project (feasibility planning, analysis, development, integration, test, and implementation)
- Capable of working within a team or as an individual
- Experience creating technical documentation
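Since this listing leans heavily on Spark streaming jobs and Kafka integration, here is a minimal, hedged sketch of what such a job can look like in PySpark: read a Kafka topic as a stream and maintain hourly event counts. The broker address, topic name, and console sink are placeholders for demonstration; a production job would typically write to Hive or Parquet instead.

```python
# Minimal Spark Structured Streaming sketch: consume a Kafka topic and keep
# hourly event counts. Broker, topic, and the console sink are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("kafka-streaming-demo")
         .getOrCreate())

# Read the Kafka topic as a streaming DataFrame.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "events")
       .load())

# Kafka delivers key/value as binary; cast the payload and bucket by hour.
counts = (raw.selectExpr("CAST(value AS STRING) AS value", "timestamp")
          .groupBy(F.window("timestamp", "1 hour"))
          .count())

# Print running aggregates to the console for the demo.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```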

Job posted by Sandeep Chaudhary

Technical Lead - Big Data and Java

Founded 1997
Products and services
Location: Pune
Experience: 2 - 7 years
Salary: 1 - 20 lacs/annum

Description
Does solving complex business problems and real-world challenges interest you? Do you enjoy seeing the impact your contributions make on a daily basis? Are you passionate about using data analytics to provide game-changing solutions to Global 2000 clients? Do you thrive in a dynamic work environment that constantly pushes you to be the best you can be and more? Are you ready to work with smart colleagues who drive for excellence in everything they do? If you possess a solutions mindset, strong analytical skills, and a commitment to being part of a tremendous journey, come join our growing, global team. See what Saama can do for your career and for your journey.

Position: Java / Big Data Lead (2162)
Location: Hinjewadi Phase 1, Pune
Type: Permanent, full-time

Requirements: the candidate should be able to
- Define application-level architecture and guide low-level database design
- Gather technical requirements and propose solutions based on the client's business and architectural needs
- Interact with prospective customers during product demos and evaluations
- Work internally with technology and business groups to define project specifications
- Showcase experience with cloud-based implementations and technically manage Big Data and J2EE projects
- Showcase hands-on programming and debugging skills in Spring, Hibernate, Java, JavaScript, JSP/Servlet, J2EE design patterns, and Python
- Have knowledge of service integration concepts (especially RESTful services and SOAP-based web services)
- Design and develop solutions for non-functional requirements (performance analysis and tuning, benchmarking/load testing, security)

Impact on the business: plays an important role in making Saama's solutions game changers for our strategic partners by using data science to solve core, complex business challenges.

Key relationships:
- Sales and pre-sales
- Product management
- Engineering
- Client organization: account management and delivery

Saama competencies:
- INTEGRITY: we do the right things
- INNOVATION: we change the game
- TRANSPARENCY: we communicate openly
- COLLABORATION: we work as one team
- PROBLEM-SOLVING: we solve core, complex business challenges
- ENJOY & CELEBRATE: we have fun

Competencies:
- Self-starter who gets results with minimal support and direction in a fast-paced environment
- Takes initiative; challenges the status quo to drive change
- Learns quickly; takes smart risks to experiment and learn
- Works well with others; builds trust and maintains credibility
- Planful: identifies and confirms key requirements in dynamic environments; anticipates tasks and contingencies
- Communicates effectively; maintains productive verbal and written communication with clients and all key stakeholders
- Stays the course despite challenges and setbacks; works well under pressure
- Strong analytical skills; able to apply inductive and deductive thinking to generate solutions for complex problems

Job posted by Sandeep Chaudhary

Senior Data Engineer

Founded 2011
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 4 - 12 lacs/annum

Sr Data Engineer Job Description

About Us
DataWeave is a data platform that aggregates publicly available data from disparate sources and makes it available in the right format to enable companies to take strategic decisions using trans-firewall analytics. It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web, at serious scale!

Requirements:
- Building an intelligent and highly scalable crawling platform
- Data extraction and processing at scale
- Enhancing existing data stores / data models
- Building a low-latency API layer for serving data to power dashboards, reports, and analytics functionality
- Constantly evolving our data platform to support new features

Expectations:
- 4+ years of relevant industry experience
- Strong algorithms and problem-solving skills
- Software development experience in one or more general-purpose programming languages (e.g. Python, C/C++, Ruby, Java, C#)
- Exceptional coding abilities and experience building large-scale, high-availability applications
- Experience with search / information-retrieval platforms like Solr, Lucene, and Elasticsearch
- Experience building and maintaining large-scale web crawlers
- In-depth knowledge of SQL and NoSQL datastores
- Ability to design and build quick prototypes
- Experience working on cloud-based infrastructure like AWS and GCE

Growth at DataWeave
- Fast-paced growth opportunities at a dynamically evolving start-up
- The opportunity to work in many different areas and explore a wide variety of tools to figure out what really excites you
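To make the crawling and extraction requirements concrete, here is a minimal sketch (far simpler than DataWeave's production platform) of a single fetch-and-extract step in Python: download a public page and pull out the text of elements matching a CSS selector. The URL, selector, and user-agent string are hypothetical.

```python
# Minimal fetch-and-extract sketch using requests + BeautifulSoup.
# The URL, CSS selector, and user agent are illustrative placeholders.
import requests
from bs4 import BeautifulSoup

def fetch_matching_text(url: str, selector: str) -> list[str]:
    # Fetch the page with a timeout and an identifiable user agent.
    resp = requests.get(url, timeout=10, headers={"User-Agent": "demo-crawler/0.1"})
    resp.raise_for_status()

    # Parse the HTML and return the text of every element matching the selector.
    soup = BeautifulSoup(resp.text, "html.parser")
    return [el.get_text(strip=True) for el in soup.select(selector)]

if __name__ == "__main__":
    print(fetch_matching_text("https://example.com/products", ".price"))
```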

Job posted by Sandeep Sreenath

Python Machine Learning Developer

Founded 2017
Products and services
Location: Noida, NCR (Delhi | Gurgaon | Noida)
Experience: 3 - 7 years
Salary: 3 - 24 lacs/annum

We are building the AI core for a legal workflow solution. You will be expected to build and train models to extract relevant information from contracts and other legal documents.

Required Skills/Experience:
- Python
- Basics of deep learning
- Experience with one ML framework (like TensorFlow, Keras, or Caffe)

Preferred Skills/Experience:
- Exposure to ML concepts like LSTMs, RNNs, and ConvNets
- Experience with NLP and the Stanford POS tagger
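As a hedged illustration of the LSTM-style models this listing mentions, here is a minimal TensorFlow/Keras sketch of a bidirectional LSTM classifier over tokenized text; a real contract-extraction model would be considerably more involved. The vocabulary size, sequence length, label count, and dummy training data are all placeholders.

```python
# Minimal Keras sketch: bidirectional LSTM over tokenized text.
# Vocabulary size, sequence length, label count, and data are placeholders.
import numpy as np
import tensorflow as tf

VOCAB_SIZE, SEQ_LEN, NUM_LABELS = 20000, 256, 5  # hypothetical sizes

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_LABELS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy tokenized sequences and labels, just to show the training call.
x = np.random.randint(0, VOCAB_SIZE, size=(32, SEQ_LEN))
y = np.random.randint(0, NUM_LABELS, size=(32,))
model.fit(x, y, epochs=1, batch_size=8)
```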

Job posted by Madhav Bhagat

Senior Software Engineer

Founded 2013
Products and services
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 4 - 6 years
Salary: 15 - 18 lacs/annum

Requirements:
- Minimum 4 years' work experience in building, managing, and maintaining analytics applications
- B.Tech/BE in CS/IT from Tier 1/2 institutes
- Strong fundamentals of data structures and algorithms
- Good analytical and problem-solving skills
- Strong hands-on experience in Python
- In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ)
- Experience building data pipelines and real-time analytics systems
- Experience with SQL (MySQL) and NoSQL (Mongo/Cassandra) databases is a plus
- Understanding of service-oriented architecture
- A record of delivering high-quality work with significant contributions
- Expert in git, unit tests, technical documentation, and other development best practices
- Experience handling small teams
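Given the emphasis on queueing systems and real-time analytics in Python, here is a minimal sketch, assuming the kafka-python client, of a consume-and-count loop; the topic, broker address, and message fields are hypothetical.

```python
# Minimal real-time counting sketch with kafka-python.
# Topic, broker, and the "page" field are illustrative placeholders.
import json
from collections import Counter
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream",                       # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

counts = Counter()
for message in consumer:
    event = message.value                 # already deserialized into a dict
    counts[event.get("page", "unknown")] += 1
    if sum(counts.values()) % 1000 == 0:  # periodically report the top pages
        print(counts.most_common(5))
```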

Job posted by Tanika Monga

Big Data Engineer

Founded 2015
Products and services
Location: Navi Mumbai, Noida, NCR (Delhi | Gurgaon | Noida)
Experience: 1 - 2 years
Salary: 4 - 10 lacs/annum

Job Requirements
- Installation, configuration, and administration of Big Data components (including Hadoop/Spark) for batch and real-time analytics and data hubs
- Capable of processing large sets of structured, semi-structured, and unstructured data
- Able to assess business rules, collaborate with stakeholders, and perform source-to-target data mapping, design, and review
- Familiar with data architecture: data ingestion pipeline design, Hadoop information architecture, data modeling and data mining, machine learning, and advanced data processing
- Optional: visual communicator, able to convert and present data in an easily comprehensible visualization using tools like D3.js and Tableau
- Enjoys being challenged and solving complex problems on a daily basis
- Proficient in executing efficient and robust ETL workflows
- Able to work in teams and collaborate with others to clarify requirements
- Able to tune Hadoop solutions to improve performance and the end-user experience
- Strong coordination and project-management skills to handle complex projects
- Engineering background
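As a rough sketch of the batch ETL work this listing describes (not this employer's actual workflow), here is a minimal PySpark job that reads semi-structured JSON, filters and reshapes it, and writes the result as Parquet partitioned by date. The input/output paths and column names are hypothetical.

```python
# Minimal PySpark batch ETL sketch: JSON in, cleaned Parquet out.
# Input/output paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-etl-demo").getOrCreate()

# Extract: read semi-structured JSON events.
events = spark.read.json("s3://example-bucket/raw/events/")

# Transform: keep valid rows and derive a date column for partitioning.
cleaned = (events
           .filter(F.col("user_id").isNotNull())
           .withColumn("event_date", F.to_date("event_time")))

# Load: write Parquet partitioned by date for downstream analytics.
(cleaned.write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3://example-bucket/curated/events/"))

spark.stop()
```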

Job posted by Sneha Pandey

Data Science Engineer (SDE I)

Founded 2017
Products and services
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: 12 - 20 lacs/annum

Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined more than 200 million end users.

For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and breathes data and algorithms, loves to play with big data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, big data analytics, and handling Unix and production servers.

A Tier-1 college background (BE from IITs, BITS Pilani, top NITs, or IIITs, or an MS from Stanford, Berkeley, CMU, or UW–Madison) or an exceptionally bright work history is a must.

Let us know if this interests you and you would like to explore the profile further.

Job posted by Shobhit Agarwal

Lead Data Engineer (SDE III)

Founded 2017
Products and services
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: 25 - 55 lacs/annum

Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined more than 200 million end users.

For this role, a credible display of innovation in past projects is a must. We are looking for hands-on leaders in data engineering with 5-11 years of research or large-scale production implementation experience and:
- Proven expertise in Spark, Kafka, and the Hadoop ecosystem
- Rock-solid algorithmic capabilities
- Production deployments of massively large-scale systems, real-time personalization, big data analytics, and semantic search
- Expertise in containerization (Docker, Kubernetes) and cloud infrastructure, preferably OpenStack
- Experience with Spark ML, TensorFlow (and TF Serving), MXNet, Scala, Python, NoSQL databases, Kubernetes, and Elasticsearch/Solr in production

A Tier-1 college background (BE from IITs, BITS Pilani, IIITs, top NITs, DTU, or NSIT, or an MS from Stanford, UC, MIT, CMU, UW–Madison, ETH, or other top global schools) or an exceptionally bright work history is a must.

Let us know if this interests you and you would like to explore the profile further.

Job posted by Shobhit Agarwal
Why apply via CutShort?
Connect with actual hiring teams and get a fast response. No third-party recruiters. No spam.