The job you are looking for has expired or has been deleted. Check out similar jobs below.

Similar jobs

Data Engineer

Founded 2015
Location: Mumbai
Experience: 1 - 5 years
Salary: 7 - 12 lacs/annum

JOB DESCRIPTION:
We are looking for a Data Engineer with a solid background in scalable systems to work with our engineering team to improve and optimize our platform. You will have significant input into the team's architectural approach and execution. We are looking for a hands-on programmer who enjoys designing and optimizing data pipelines for large-scale data. This is NOT a "data scientist" role, so please don't apply if that is what you are looking for.

RESPONSIBILITIES:
1. Build, maintain, and test performant, scalable data pipelines
2. Work with data scientists and application developers to implement scalable pipelines for data ingestion, processing, machine learning, and visualization
3. Build interfaces for ingestion across various data stores

MUST-HAVE:
1. A track record of building and deploying data pipelines as part of work or side projects
2. Ability to work with an RDBMS such as MySQL or Postgres
3. Ability to deploy over cloud infrastructure, at least AWS
4. Demonstrated ability and hunger to learn

GOOD-TO-HAVE:
1. Computer Science degree
2. Expertise in at least one of: Python, Java, Scala
3. Expertise and experience in deploying solutions based on Spark and Kafka
4. Knowledge of container systems like Docker or Kubernetes
5. Experience with NoSQL / graph databases
6. Knowledge of Machine Learning

Kindly apply only if you are skilled in building data pipelines.
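For readers unfamiliar with the work this posting describes, here is a minimal extract-transform-load sketch in Python, matching the must-haves above (an RDBMS such as Postgres, plus AWS). All connection details, table, and bucket names are illustrative assumptions, not taken from the posting.

```python
# A minimal ETL sketch (assumed names throughout): extract rows from
# Postgres, serialize them to CSV, and land the file on AWS S3.
import csv
import io

import boto3     # AWS SDK for Python
import psycopg2  # PostgreSQL driver

def run_pipeline(pg_dsn: str, bucket: str, key: str) -> None:
    # Extract: pull rows from a (hypothetical) events table.
    with psycopg2.connect(pg_dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT id, event_type, created_at FROM events")
        rows = cur.fetchall()

    # Transform: serialize the rows to CSV in memory.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "event_type", "created_at"])
    writer.writerows(rows)

    # Load: upload the artifact to S3.
    boto3.client("s3").put_object(
        Bucket=bucket, Key=key, Body=buf.getvalue().encode("utf-8")
    )

if __name__ == "__main__":
    run_pipeline(
        "dbname=analytics user=etl host=localhost",  # placeholder DSN
        "example-data-bucket",                       # placeholder bucket
        "exports/events.csv",                        # placeholder key
    )
```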

Job posted by Zeimona Dsouza

Software Developer

via IQVIA
Founded 1969
Location: Bengaluru (Bangalore), Kochi (Cochin)
Experience: 2 - 7 years
Salary: 5 - 25 lacs/annum

Job Skill Requirements:
• 4+ years of experience building and managing complex products/solutions
• 2+ years of experience in DW/ELT/ETL technologies (nice to have)
• 3+ years of hands-on development experience using Big Data technologies such as Hadoop and Spark
• 3+ years of hands-on development experience using Big Data ecosystem components such as Hive, Impala, HBase, Sqoop, Oozie, etc.
• Proficient-level programming in Scala
• Good to have: hands-on experience building web services on a Python/Scala stack
• Good to have: experience developing RESTful web services
• Knowledge of web technologies and protocols (NoSQL/JSON/REST/JMS)

Job posted by Ambili Sasidharan

Software Engineer

Founded 2013
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 2 - 11 years
Salary: 8 - 22 lacs/annum

Job Description:
We are looking for someone who can work with the platform or analytics vertical to extend and scale our product line. Every product line depends on other products within the LimeTray ecosystem, and an SSE-2 is expected to collaborate with different internal teams to own stable and scalable releases. While every product line has its own tech stack, we expect the person to be comfortable working across all of them as and when needed. Some of the technologies/frameworks we work with: Microservices, Java, Node, MySQL, MongoDB, Angular, React, Kubernetes, AWS, Python.

Requirements:
- Minimum 3 years of work experience in building, managing and maintaining Python-based backend applications
- B.Tech/BE in CS from Tier 1/2 institutes
- Strong fundamentals of data structures and algorithms
- Experience in Python and design patterns
- Expert in git, unit tests, technical documentation and other development best practices
- Worked with SQL and NoSQL databases (Cassandra, MySQL)
- Understanding of async programming; knowledge of messaging services, whether pub/sub or streaming (e.g. Kafka, ActiveMQ, RabbitMQ)
- Understanding of algorithms, data structures and server management
- Understanding of microservice or distributed architectures
- Delivered high-quality work with significant contribution
- Experience in handling small teams
- Good debugging skills
- Good analytical and problem-solving skills

What we are looking for:
- Ownership driven: owns development end to end
- Team player: works well in a team; collaborates within and outside the team
- Communication: speaks and writes clearly and articulately; maintains this standard in all forms of written communication, including email
- Proactive and persistent: acts without being told to and demonstrates a willingness to go the distance to get something done
- Develops emotional bonding with the product and does what is good for the product
- Customer-first mentality: understands customers' pain and works towards solutions
- Honest and always keeps high standards; expects the same from the team
- Strict on quality and stability of the product
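Since the requirements above lean on async programming and message queues, here is a minimal sketch of the async producer/consumer pattern using only Python's standard library. In a real deployment the in-process queue would be replaced by a broker such as Kafka or RabbitMQ; names here are illustrative.

```python
# A minimal async producer/consumer sketch with the standard library;
# in production the queue would be an external broker, not in-process.
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    # Publish a handful of messages, then signal completion with None.
    for i in range(5):
        await queue.put(f"order-{i}")
        await asyncio.sleep(0.1)  # simulate message arrival intervals
    await queue.put(None)

async def consumer(queue: asyncio.Queue) -> None:
    # Drain messages until the producer's sentinel arrives.
    while True:
        msg = await queue.get()
        if msg is None:
            break
        print(f"processed {msg}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())
```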

Job posted by Bhavya Jain

Senior Data Engineer

Founded 2011
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 4 - 12 lacs/annum

Sr Data Engineer Job Description

About Us:
DataWeave is a data platform which aggregates publicly available data from disparate sources and makes it available in the right format, enabling companies to take strategic decisions using trans-firewall analytics. It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web, at serious scale!

Requirements:
- Building an intelligent and highly scalable crawling platform
- Data extraction and processing at scale
- Enhancing existing data stores/data models
- Building a low-latency API layer for serving data to power dashboards, reports, and analytics functionality
- Constantly evolving our data platform to support new features

Expectations:
- 4+ years of relevant industry experience
- Strong algorithms and problem-solving skills
- Software development experience in one or more general-purpose programming languages (e.g. Python, C/C++, Ruby, Java, C#)
- Exceptional coding abilities and experience with building large-scale and high-availability applications
- Experience in search/information-retrieval platforms like Solr, Lucene and Elasticsearch
- Experience in building and maintaining large-scale web crawlers
- In-depth knowledge of SQL and NoSQL datastores
- Ability to design and build quick prototypes
- Experience working on cloud-based infrastructure like AWS, GCE

Growth at DataWeave:
- Fast-paced growth opportunities at a dynamically evolving start-up
- The opportunity to work in many different areas and explore a wide variety of tools to figure out what really excites you
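As a rough illustration of the crawling work described above, here is a minimal link-extraction sketch using only the Python standard library. The seed URL is a placeholder, and a production crawler would add politeness, deduplication, and a distributed URL frontier.

```python
# A minimal crawler fetch step: download one page and collect its links.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects href attributes from anchor tags."""
    def __init__(self) -> None:
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs) -> None:
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def fetch_links(url: str) -> list[str]:
    # Fetch the page, parse it, and resolve relative links.
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = LinkParser()
    parser.feed(html)
    return [urljoin(url, link) for link in parser.links]

print(fetch_links("https://example.com"))  # placeholder seed URL
```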

Job posted by Sandeep Sreenath

Big Data Developer

Founded 2011
Location: Chennai
Experience: 1 - 5 years
Salary: 1 - 6 lacs/annum

• Looking for a Big Data Engineer with 3+ years of experience.
• Hands-on experience with MapReduce-based platforms, like Pig, Spark, Shark.
• Hands-on experience with data pipeline tools like Kafka, Storm, Spark Streaming.
• Store and query data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto.
• Hands-on experience in managing Big Data on a cluster with HDFS and MapReduce.
• Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm.
• Experience with Azure cloud, Cognitive Services, and Databricks is preferred.
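To make the streaming bullet points concrete, here is a minimal Spark Structured Streaming sketch that reads from Kafka and prints decoded records. It assumes the spark-sql-kafka connector package is on the classpath; the broker address and topic name are placeholders.

```python
# A minimal Kafka -> Spark Structured Streaming -> console sketch.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("kafka-stream-sketch")
    .getOrCreate()
)

# Subscribe to a Kafka topic; each record arrives as key/value bytes.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "clickstream")                   # placeholder topic
    .load()
    .select(col("value").cast("string").alias("payload"))
)

# Write the decoded payloads to the console in micro-batches.
query = (
    events.writeStream
    .format("console")
    .option("truncate", "false")
    .start()
)
query.awaitTermination()
```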

Job posted by John Richardson

Data Science Engineer (SDE I)

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: 12 - 20 lacs/annum

Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with big data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, big data analytics, and Unix and production servers. A Tier-1 college background (BE from the IITs, BITS-Pilani, top NITs or IIITs, or an MS from Stanford, Berkeley, CMU or UW-Madison) or an exceptionally strong work history is a must. Let us know if this interests you and you would like to explore the profile further.

Job posted by Shobhit Agarwal

Big Data Engineer

Founded 2012
Location: Mumbai
Experience: 2 - 7 years
Salary: 4 - 20 lacs/annum

As a Big Data Engineer, you will build utilities that help orchestrate the migration of massive Hadoop/Big Data systems onto public cloud systems. You will build data processing scripts and pipelines that serve numerous jobs and queries per day. The services you build will integrate directly with cloud services, opening the door to new and cutting-edge reusable solutions. You will work with engineering teams, co-workers, and customers to gain new insights and dream up new possibilities.

The Big Data Engineering team is hiring in the following areas:
• Distributed storage and compute solutions
• Data ingestion, consolidation, and warehousing
• Cloud migrations and replication pipelines
• Hybrid on-premise and in-cloud Big Data solutions
• Big Data, Hadoop and Spark processing

Basic Requirements:
• 2+ years of hands-on experience in data structures, distributed systems, Hadoop and Spark, and SQL and NoSQL databases
• Strong software development skills in at least one of: Java, C/C++, Python or Scala
• Experience building and deploying cloud-based solutions at scale
• Experience in developing Big Data solutions (migration, storage, processing)
• BS, MS or PhD degree in Computer Science or Engineering, and 5+ years of relevant work experience in Big Data and cloud systems
• Experience building and supporting large-scale systems in a production environment

Technology Stack:
• Cloud platforms: AWS, GCP or Azure
• Big Data distributions: any of Apache Hadoop, CDH, HDP, EMR, Google DataProc, HDInsight
• Distributed processing frameworks: one or more of MapReduce, Apache Spark, Apache Storm, Apache Flink
• Database/warehouse: Hive, HBase, and at least one cloud-native service
• Orchestration frameworks: any of Airflow, Oozie, Apache NiFi, Google DataFlow
• Message/event solutions: any of Kafka, Kinesis, Cloud Pub/Sub
• Container orchestration (good to have): Kubernetes or Swarm
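The orchestration frameworks listed above coordinate pipelines like the ones this role builds. As a rough sketch, here is a two-task Apache Airflow DAG, assuming a recent Airflow 2.x; the DAG id, schedule, and task bodies are illustrative placeholders.

```python
# A minimal two-task orchestration DAG sketch for Apache Airflow 2.x.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull data from the source system")

def load() -> None:
    print("write the transformed data to the warehouse")

with DAG(
    dag_id="daily_migration_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ spelling; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract must finish before load starts
```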

Job posted by Anwar Shaikh

Senior Software Engineer

Founded 2013
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 4 - 6 years
Salary: 15 - 18 lacs/annum

Requirements:
- Minimum 4 years of work experience in building, managing and maintaining analytics applications
- B.Tech/BE in CS/IT from Tier 1/2 institutes
- Strong fundamentals of data structures and algorithms
- Good analytical and problem-solving skills
- Strong hands-on experience in Python
- In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ)
- Experience in building data pipelines and real-time analytics systems
- Experience in SQL (MySQL) and NoSQL (Mongo/Cassandra) databases is a plus
- Understanding of service-oriented architecture
- Delivered high-quality work with significant contribution
- Expert in git, unit tests, technical documentation and other development best practices
- Experience in handling small teams

Job posted by Tanika Monga

Big Data Administrator

Founded 1989
Location: Bengaluru (Bangalore)
Experience: 3 - 8 years
Salary: 14 - 22 lacs/annum

As a Big Data Administrator, you'll be responsible for the administration and governance of a complex analytics platform that is already changing the way large industrial companies manage their assets. A Big Data Administrator understands cutting-edge tools and frameworks, and is able to determine the best tools for any given task. You will enable and work with our other developers to use cutting-edge technologies in the fields of distributed systems, data ingestion and mapping, and machine learning, to name a few. We also strongly encourage everyone to tinker with existing tools and to stay up to date and test new technologies, all with the aim of ensuring that our existing systems don't stagnate or deteriorate.

Responsibilities may include, but are not limited to, the following:
● Build a scalable Big Data platform designed to serve many different use cases and requirements
● Build a highly scalable framework for ingesting, transforming and enhancing data at web scale
● Develop data structures and processes using components of the Hadoop ecosystem such as Avro, Hive, Parquet, Impala, HBase, Kudu, Tez, etc.
● Establish automated build and deployment pipelines
● Implement machine learning models that enable customers to glean hidden insights about their data
● Implement security and integrate with components such as LDAP, AD, Sentry, Kerberos
● Apply a strong understanding of row-level and role-based security concepts such as inheritance
● Establish scalability benchmarks for predictable scalability thresholds

Qualifications:
● Bachelor's degree in Computer Science or a related field
● 6+ years of system-building experience
● 4+ years of programming experience using JVM-based languages
● A passion for DevOps and an appreciation for continuous integration/deployment
● A passion for QA and an understanding that testing is not someone else's responsibility
● Experience automating infrastructure and build processes
● Outstanding programming and problem-solving skills
● Strong passion for technology and building great systems
● Excellent communication skills and ability to work using Agile methodologies
● Ability to work quickly and collaboratively in a fast-paced, entrepreneurial environment
● Experience with service-oriented (SOA) and event-driven (EDA) architectures
● Experience using big data solutions in an AWS environment
● Experience with NoSQL data stores: Cassandra, HDFS and/or Elasticsearch
● Experience with JavaScript or associated frameworks

Preferred skills (we value these qualities, but they're not required for this role):
● Masters or Ph.D. in a related field
● Experience as an open source contributor
● Experience with Akka, stream processing technologies and concurrency frameworks
● Experience with data modeling
● Experience with Chef, Puppet, Ansible, Salt or equivalent
● Experience with Docker, Mesos and Marathon
● Experience with distributed messaging services, preferably Kafka
● Experience with distributed data processors, preferably Spark
● Experience with Angular, React, Redux, Immutable.js, Rx.js, Node.js or equivalent
● Experience with Reactive and/or Functional programming
● Understanding of Thrift, Avro or protocol buffers
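Avro and Parquet come up repeatedly in these responsibilities. As a tiny illustration of the columnar formats involved, here is a sketch that writes and reads back a Parquet file with pyarrow; the column names and file path are made up.

```python
# Write a small table to Parquet and read it back with pyarrow.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "sensor_id": [1, 2, 3],
    "reading": [0.5, 0.7, 0.9],
})
pq.write_table(table, "readings.parquet")  # columnar, compressed on disk
print(pq.read_table("readings.parquet").to_pydict())
```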

Job posted by Arjun Ravindran

Technical Lead - Big Data and Java

Founded 1997
Location: Pune
Experience: 2 - 7 years
Salary: 1 - 20 lacs/annum

Description:
Does solving complex business problems and real-world challenges interest you? Do you enjoy seeing the impact your contributions make on a daily basis? Are you passionate about using data analytics to provide game-changing solutions to Global 2000 clients? Do you thrive in a dynamic work environment that constantly pushes you to be the best you can be, and more? Are you ready to work with smart colleagues who drive for excellence in everything they do? If you possess a solutions mindset, strong analytical skills, and commitment to be part of a tremendous journey, come join our growing, global team. See what Saama can do for your career and for your journey.

Position: Java/Big Data Lead (2162)
Location: Hinjewadi Phase 1, Pune
Type: Permanent, full time

Requirements — the candidate should be able to:
- Define application-level architecture and guide low-level database design
- Gather technical requirements and propose solutions based on the client's business and architectural needs
- Interact with prospective customers during product demos/evaluations
- Work internally with technology and business groups to define project specifications
- Showcase experience with cloud-based implementations and technically manage Big Data and J2EE projects
- Showcase hands-on programming and debugging skills in Spring, Hibernate, Java, JavaScript, JSP/Servlet, J2EE design patterns / Python
- Have knowledge of service integration concepts (especially RESTful services / SOAP-based web services)
- Design and develop solutions for non-functional requirements (performance analysis and tuning, benchmarking/load testing, security)

Impact on the business: Plays an important role in making Saama's solutions game changers for our strategic partners by using data science to solve core, complex business challenges.

Key relationships:
- Sales and pre-sales
- Product management
- Engineering
- Client organization: account management and delivery

Saama competencies:
- INTEGRITY: we do the right things
- INNOVATION: we change the game
- TRANSPARENCY: we communicate openly
- COLLABORATION: we work as one team
- PROBLEM-SOLVING: we solve core, complex business challenges
- ENJOY & CELEBRATE: we have fun

Competencies:
- Self-starter who gets results with minimal support and direction in a fast-paced environment
- Takes initiative; challenges the status quo to drive change
- Learns quickly; takes smart risks to experiment and learn
- Works well with others; builds trust and maintains credibility
- Planful: identifies and confirms key requirements in dynamic environments; anticipates tasks and contingencies
- Communicates effectively: productive verbal and written communication with clients and all key stakeholders
- Stays the course despite challenges and setbacks; works well under pressure
- Strong analytical skills; able to apply inductive and deductive thinking to generate solutions for complex problems

Job posted by Sandeep Chaudhary
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No 3rd party recruiters. No spam.