Big Data Jobs in Pune

Explore top Big Data job opportunities at leading companies and startups in Pune. All jobs are posted by verified employees who can be contacted directly.

Data Engineer
Founded 2016
Location: Remote, Pune
Experience: 3 - 5 years
Salary: ₹9,00,000 - ₹15,00,000

We are looking for a smart Data Engineer with production-level experience building Big Data solutions and handling large volumes of data. In particular, the following Big Data skills will help:
- Production experience with Avro
- Production experience with Kafka, Kafka Connect, and the Confluent Schema Registry (managing Avro over Kafka); a minimal producer sketch follows this listing
- Experience with the Snowflake data warehouse (not mandatory, but very nice to have)
- Ideally, experience with both Python and Scala
- Experience with Spark is a plus
- Experience with any of Databricks, EMR, or Hudi is good to have but not mandatory

Other non-negotiable requirements:
- Good academics
- Excellent communication skills
- Candidates who are ready and immediately available to join are preferred; remote working is no longer an issue.

About Tech Prescient: We are a product development and technology services company that works with customers to build great products. We design and develop our customers' product stacks, so the quality of the work we produce is always premium. We are looking for equally motivated people to join our vibrant team, and we are confident it will be a win-win.
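The Avro-over-Kafka requirement above can be illustrated with a short sketch. This is a minimal, hypothetical example using the confluent-kafka Python client; the broker address, Schema Registry URL, topic name, and record schema are placeholders, not details from the job post.

```python
# Minimal sketch: produce Avro records to Kafka via the Confluent Schema Registry.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

SCHEMA_STR = """
{
  "type": "record",
  "name": "PageView",
  "fields": [
    {"name": "user_id", "type": "string"},
    {"name": "url",     "type": "string"},
    {"name": "ts",      "type": "long"}
  ]
}
"""

registry = SchemaRegistryClient({"url": "http://localhost:8081"})   # placeholder URL
serialize_value = AvroSerializer(registry, SCHEMA_STR)              # registers the schema
producer = Producer({"bootstrap.servers": "localhost:9092"})        # placeholder broker

event = {"user_id": "u-123", "url": "/pricing", "ts": 1600000000000}

producer.produce(
    "page-views",                       # hypothetical topic
    key=event["user_id"],
    # Serialize the dict to Avro bytes, framed with the registry schema id.
    value=serialize_value(event, SerializationContext("page-views", MessageField.VALUE)),
)
producer.flush()  # block until the broker acknowledges delivery
```

Kafka Connect and schema-compatibility settings are deliberately left out; the point is only how the serializer registers the schema and frames each message with its schema id.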

Job posted by Amit Arole

Engineering Manager
Founded 2016
Location: Remote, Pune
Experience: 7 - 10 years
Salary: ₹12,00,000 - ₹24,00,000

We are looking for a Technical Ninja / Technical Lead to work with our customers as a technical authority. We build products for technology companies, so someone who is hands-on with technology and has a broad understanding of various technology stacks is a must.

As a Technical Lead, your responsibilities will include:
- Hands-on, full-stack development
- Leading and guiding multiple teams, and reviewing code, to build world-class products with high code-quality standards
- Interfacing with customers for technical discussions
- Applying sound knowledge of various technology stacks and practical, hands-on implementation experience
- Writing proposals, supporting sales conversations, and working directly with the CEO on this front

Other non-negotiable requirements:
- Good academics
- Excellent communication skills
- Candidates who are ready and immediately available to join are preferred; remote working is no longer an issue.

About Tech Prescient: We are a product development and technology services company that works with customers to build great products. We design and develop our customers' product stacks, so the quality of the work we produce is always premium. We are looking for equally motivated people to join our vibrant team, and we are confident it will be a win-win.

Job posted by Amit Arole

Data Analyst - Python/Linux
via Taliun
Founded 2019
Location: Pune
Experience: 3 - 6 years
Salary: ₹4,00,000 - ₹18,00,000

Job Description:
- 2 - 5 years' experience with Python
- Experience building tools to support data pipeline workflow development (see the sketch after this list)
- Keen understanding of SQL and distributed data processing
- Experience with AWS and Cassandra
- Able to work independently and drive results
- A passion for learning and for working effectively in a nimble environment
- Experience developing backend APIs
- Experience writing automated unit and integration test cases
- Experience with source control tools such as Bitbucket or Git
- Experience with IDEs and defect-tracking tools
- Development experience on Linux and shell scripting
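As a rough illustration of "tools to support data pipeline workflow development" together with automated unit tests, here is a minimal, dependency-free Python sketch; the Pipeline helper and step names are hypothetical, and the test is written in pytest style.

```python
# A composable pipeline of row-transforming steps, plus a unit test.
from dataclasses import dataclass
from typing import Callable, Iterable, List

Step = Callable[[Iterable[dict]], Iterable[dict]]


@dataclass
class Pipeline:
    steps: List[Step]  # ordered list of transformations

    def run(self, rows: Iterable[dict]) -> list:
        for step in self.steps:
            rows = step(rows)          # each step consumes the previous output
        return list(rows)


def drop_null_ids(rows: Iterable[dict]) -> Iterable[dict]:
    """Filter out records with a missing 'id' field."""
    return (r for r in rows if r.get("id") is not None)


def normalise_names(rows: Iterable[dict]) -> Iterable[dict]:
    """Lower-case and strip the 'name' field."""
    for r in rows:
        yield {**r, "name": r.get("name", "").strip().lower()}


# pytest-style unit test for the pipeline
def test_pipeline_cleans_rows():
    raw = [{"id": 1, "name": "  Alice "}, {"id": None, "name": "Bob"}]
    out = Pipeline([drop_null_ids, normalise_names]).run(raw)
    assert out == [{"id": 1, "name": "alice"}]
```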

Job posted by Pankaj G

Big Data Architect
Founded 2002
Location: Pune, Ahmedabad
Experience: 7 - 20 years
Salary: ₹12,00,000 - ₹20,00,000

KCSIT Global is a CMMI Level 3, ISO 27001-certified cloud and data solutions company. It has an international presence in the US, UK, and South Africa, along with development centers in India in Ahmedabad and Pune. KCS is a Microsoft Gold partner, Google Cloud partner, and Amazon cloud partner, and also works with other OEMs.

We are urgently hiring a Big Data Architect at Viman Nagar, Pune and Ahmedabad, Gujarat. Kindly send your updated profile if you are interested.

Job type: Permanent
Company website: https://www.kcsitglobal.com/

Job purpose:
- Design, implement, and maintain the big data platform on the cloud, and oversee processes to ensure a secure data pipeline is implemented
- Oversee development and implementation of data ingestion and processing
- Ensure data is provided to business partners in an easily consumable form
- Provide technical leadership and support to keep the data secure

Key accountabilities:
- Understand company needs to define platform specifications
- Plan and design the architecture of the data platform
- Partner with business stakeholders to understand what data is needed and in what form
- Evaluate and select appropriate software or hardware and suggest integration methods
- Oversee assigned programs (e.g., conduct code reviews) and provide guidance to team members
- Assist with solving technical problems when they arise
- Ensure the implementation of the agreed architecture and infrastructure
- Address technical concerns, ideas, and suggestions
- Monitor systems to ensure they meet both user needs and business goals

Skills & competencies:
- Proven experience as a Big Data Architect
- Strong background in big data infrastructure, engineering, and development, and in working with big data and the Hadoop file system
- Hands-on experience with the Hadoop ecosystem (HDFS, Sqoop, Hive, Pig, Spark, Scala); a small Spark-on-Hive sketch follows this listing
- Successful background as an architect on EDW/data lake projects preferred
- Understanding of strategic IT solutions
- Experience building cloud-native, container-based solutions

If you are interested in this position, please share your updated Word resume with the following details:
Full name:
Total big data experience:
Total data architect experience:
Current CTC:
Expected CTC:
Notice period:
Current location:
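A small sketch of the kind of hands-on Hadoop-ecosystem work mentioned above: a PySpark batch job that aggregates a Hive table and writes Parquet back to HDFS. The table name, columns, and paths are hypothetical, and the snippet assumes a cluster with a reachable Hive metastore.

```python
# Batch aggregation over a Hive table with Spark SQL, output as partitioned Parquet.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("daily-order-rollup")
    .enableHiveSupport()          # read/write Hive tables via the metastore
    .getOrCreate()
)

# Aggregate order value per day from a hypothetical Hive table.
daily = spark.sql("""
    SELECT order_date, SUM(order_value) AS total_value, COUNT(*) AS orders
    FROM sales.orders
    GROUP BY order_date
""")

# Write the result back to HDFS as Parquet, partitioned by date.
daily.write.mode("overwrite").partitionBy("order_date") \
     .parquet("hdfs:///warehouse/rollups/daily_orders")

spark.stop()
```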

Job posted by Amit Sali

Senior Software Engineer - Backend
Founded 2015
Location: Pune
Experience: 5 - 10 years
Salary: ₹17,00,000 - ₹25,00,000

Responsibilities:
- Ensure timely and top-quality product delivery
- Ensure that the end product is fully and correctly defined and documented
- Ensure implementation and continuous improvement of formal processes to support product development activities
- Drive the architecture and design decisions needed to achieve cost-effective, high-performance results
- Conduct feasibility analysis and produce functional and design specifications for proposed new features
- Provide helpful and productive code reviews for peers and junior members of the team
- Troubleshoot complex issues discovered in-house as well as in customer environments

Qualifications:
- Strong computer science fundamentals in algorithms, data structures, databases, operating systems, etc.
- Expertise in Java, object-oriented programming, and design patterns
- Experience coding and implementing scalable solutions in a large-scale distributed environment
- Working experience in a Linux/UNIX environment is good to have
- Experience with relational databases and database concepts, preferably MySQL
- Experience with SQL and Java optimization for real-time systems
- Familiarity with version control systems (Git) and build tools like Maven
- Excellent interpersonal, written, and verbal communication skills
- BE/B.Tech./M.Sc./MCS/MCA in Computer Science or equivalent

Job posted by Sourabh Gandhe

Data Engineer
Founded 1993
Location: Anywhere
Experience: 3 - 10 years
Salary: ₹10,00,000 - ₹32,00,000

Data Engineering role at ThoughtWorks

ThoughtWorks India is looking for talented data engineers who are passionate about building large-scale data processing systems to help manage the ever-growing information needs of our clients. Our developers have been contributing code to major organizations and open-source projects for over 25 years. They have also been writing books, speaking at conferences, and helping push software development forward, changing companies and even industries along the way. As consultants, we work with our clients to ensure we are delivering the best possible solution. Our Lead Developers play an important role in leading these projects to success.

You will be responsible for:
- Creating complex data processing pipelines as part of diverse, high-energy teams
- Designing scalable implementations of the models developed by our Data Scientists
- Hands-on programming based on TDD, usually in a pair-programming environment
- Deploying data pipelines in production based on Continuous Delivery practices

Ideally, you should have:
- 2 - 6 years of overall industry experience
- A minimum of 2 years of experience building and deploying large-scale data processing pipelines in a production environment
- Strong domain modelling and coding experience in Java, Scala, or Python
- Experience building data pipelines and data-centric applications using distributed storage platforms such as HDFS, S3, and NoSQL databases (HBase, Cassandra, etc.), and distributed processing platforms such as Hadoop, Spark, Hive, Oozie, Airflow, and Kafka in a production setting (a minimal orchestration sketch follows this listing)
- Hands-on experience with at least one of MapR, Cloudera, Hortonworks, and/or cloud platforms (AWS EMR, Azure HDInsight, Qubole, etc.)
- Knowledge of software best practices such as Test-Driven Development (TDD), Continuous Integration (CI), and Agile development
- Strong communication skills and the ability to work in a consulting environment

And here are some of the perks of being part of a unique organization like ThoughtWorks:
- A real commitment to "changing the face of IT" -- our way of thinking about diversity and inclusion. Over the past ten years, we have implemented many initiatives to make ThoughtWorks a place that reflects the world around us and a welcoming home to technologists of all stripes. We are not perfect, but we are actively working towards true gender balance for our business and our industry, and you will see that diversity reflected on our project teams and in our offices.
- Continuous learning. You will be constantly exposed to new languages, frameworks, and ideas from your peers and from the different projects you work on, challenging you to stay at the top of your game.
- Support to grow as a technologist outside of your role at ThoughtWorks. This is why ThoughtWorkers have written over 100 books and can be found speaking at (and, ahem, keynoting) tech conferences all over the world. We love to learn and share knowledge, and you will find a community of passionate technologists eager to back your endeavors, whatever they may be. You will also receive financial support to attend conferences every year.
- An organizational commitment to social responsibility. ThoughtWorkers challenge each other to be just a little more thoughtful about the world around us, and we believe in using our profits for good. All around the world, you will find ThoughtWorks supporting great causes and organizations in both official and unofficial capacities.

If you relish the idea of being part of ThoughtWorks' Data Practice, which extends beyond the work we do for our customers, you may find that ThoughtWorks is the right place for you. If you share our passion for technology and want to help change the world with software, we want to hear from you!
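As a rough idea of the pipeline orchestration tools named above, here is a minimal Apache Airflow 2.x DAG sketch; the DAG id, task callables, and schedule are illustrative and not tied to any ThoughtWorks project.

```python
# A daily two-step batch pipeline expressed as an Airflow DAG.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull the previous day's raw events from object storage.
    print("extracting raw events for", context["ds"])


def transform(**context):
    # Placeholder: clean and aggregate the extracted data.
    print("transforming events for", context["ds"])


with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",   # run once per day
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```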

Job posted by Suresh Teegireddy

Data Scientist
Founded 2019
Location: Pune
Experience: 3 - 7 years
Salary: ₹12,00,000 - ₹18,00,000

The selected candidate will be part of the in-house Data Labs team and will be responsible for creating an insights-driven decision structure. This will include:
- Scorecards
- Strategies
- MIS

The verticals covered are:
- Risk
- Marketing
- Product

Job posted by Ankit Goenka

Principal Architect – Big Data Security Architecture
Founded 1998
Location: Dubai, Anywhere
Experience: 12 - 18 years
Salary: ₹50,00,000 - ₹70,00,000

Responsibilities:
- Design secure solutions in line with the business strategy and security requirements
- Contribute to the enterprise security architecture by developing strategies, reference architectures, roadmaps, architectural principles, technology standards, security non-functional requirements, architectural decisions, and design patterns
- Deliver cyber security architectural artifacts such as high-level designs and solution blueprints
- Ensure the enforcement of security requirements in solution architecture
- Contribute to educating other architects and engineering teams in designing and implementing secure solutions

Technologies (the candidate should have knowledge of and experience in designing and implementing the following technologies and related domains):
- Cloud security
- Identity and access management
- Encryption, masking, and key management (a small field-level encryption sketch follows this listing)
- Data classification, data privacy, and data leakage prevention
- Infrastructure security (network/servers/virtualization)
- Application security
- Endpoint security
- SIEM and log management
- Forward and reverse proxy
- Big data security
- IoT security
- SAP security (preferred)

Architecture skills:
- Solid experience in developing security solution architecture
- Solid experience with and knowledge of TOGAF and SABSA or other enterprise architecture frameworks
- Strong experience in developing architectural artifacts, including reference architectures, roadmaps, architectural principles, technology standards, security non-functional requirements, architectural decisions, and design patterns
- Strong experience in documenting existing, transition, and target architectures

Cyber security skills:
- Solid experience in performing security risk assessments and implementing controls
- Strong experience in designing and implementing security controls using the technologies listed above
- Strong knowledge of the offensive and defensive aspects of cybersecurity, with a solid understanding of attack techniques and abuse cases
- Strong knowledge of, and implementation experience with, cyber security standards, frameworks, and regulations such as ISO 27001, NIST CSF, CSA CCM, PCI DSS, and GDPR
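To make the encryption and masking item concrete, here is a small, illustrative Python sketch of field-level encryption using the cryptography package's Fernet recipe; the record layout is hypothetical, and in a real deployment the key would come from a key-management service rather than being generated inline.

```python
# Encrypt a single PII field before the record leaves the trusted zone.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in production: fetch from a KMS / vault
fernet = Fernet(key)

record = {"customer_id": 42, "email": "user@example.com", "spend": 1234.5}

# Replace the sensitive value with its ciphertext (kept as a string).
record["email"] = fernet.encrypt(record["email"].encode()).decode()

# Later, an authorised consumer holding the key can decrypt it.
plain_email = fernet.decrypt(record["email"].encode()).decode()
assert plain_email == "user@example.com"
```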

Job posted by Apoorva Chauhan

Ruby on Rails Developer
Founded 2007
Location: Navi Mumbai, Pune
Experience: 3 - 6 years
Salary: ₹5,00,000 - ₹9,00,000

Dear Candidate, please find the details below.

Role: Ruby on Rails Developer
Years of experience: 3 to 6 years

Required skills:
- Ruby, Ruby on Rails; experience developing web applications using Ruby/RoR
- Databases: PostgreSQL
- Knowledge of REST is an added advantage
- OS: Linux

Please share your details at anshuman.baghel@niyuj.com, including:
Total experience:
Relevant experience:
Current CTC:
Expected CTC:
Notice period:

Niyuj is a product engineering company that engages with the customer at different levels of the product development lifecycle in order to build quality products, on budget and on time. Founded in 2007 by a passionate technology leader, we have stable and seasoned leadership with hands-on experience working for, or consulting with, companies ranging from bootstrapped start-ups to large multinationals. We have global experience in the US, Australia, and India, and have worked with Fortune 500 companies as well as prominent startups; clients include Symantec, VMware, Carbonite, and Edgewater Networks.

Domain areas we work in:
- Cloud services: enterprises are rushing to incorporate cloud computing, big data, and mobile into their IT infrastructures.
- Big data analytics: revolutionizing the way Fortune 1000 companies harness billions of data points and turn them into a competitive advantage.
- Network and security: network- and security-related system-level work that meets customer demands and delivers real value.

Our prime customer, Carbonite, is America's #1 cloud backup and storage company, with over 1.5 million customers, headquartered in Boston, MA, with offices in 15 locations across the world.

Your potential for exponential growth: your experience and expertise would be a great addition to our team, and you will have an opportunity to work closely with industry leaders, literally sitting across the table and jointly building the future with people who are noted gurus and industry veterans from prestigious institutions like the IITs and top US universities, with industry experience at Fortune 500 companies like EMC, Symantec, and VERITAS.

Job posted by Anshuman Baghel

Big Data Engineer
Founded 2015
Location: Noida, Pune, NCR (Delhi | Gurgaon | Noida)
Experience: 1 - 4 years
Salary: ₹4,00,000 - ₹10,00,000

Job Description: The Data Engineering team is one of the core technology teams at Lumiq.ai and is responsible for creating all of the data-related products and platforms, which scale to any amount of data, users, and processing. The team also interacts with our customers to work out solutions, create technical architectures, and deliver the products and solutions. If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer, or how a customer can use our products, then Lumiq is the place of opportunities.

Who are you?
- Enthusiast is your middle name. You know what's new in Big Data technologies and how things are moving.
- Apache is your toolbox, and you have contributed to open-source projects or discussed problems with the community on several occasions.
- You use the cloud for more than just provisioning a virtual machine.
- Vim is friendly to you, and you know how to exit Nano.
- You check the logs before screaming about an error.
- You are a solid engineer who writes modular code and commits to Git.
- You are a doer who doesn't say "no" without first understanding.
- You understand the value of documenting your work.
- You are familiar with the machine learning ecosystem and can help fellow Data Scientists explore data and create production-ready ML pipelines.

Eligibility:
- At least 2 years of data engineering experience
- Has interacted with customers

Must-have skills:
- Amazon Web Services (AWS): EMR, Glue, S3, RDS, EC2, Lambda, SQS, SES (a small Lambda + S3 sketch follows this listing)
- Apache Spark
- Python
- Scala
- PostgreSQL
- Git
- Linux

Good-to-have skills:
- Apache NiFi
- Apache Kafka
- Apache Hive
- Docker
- Amazon certification
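A small sketch of the AWS serverless pattern implied by the Lambda/S3 skills above: a Python Lambda handler that reacts to an S3 object-created event. The event shape follows the standard S3 notification format; the processing itself (counting lines) is a placeholder, not anything specific to Lumiq.

```python
# Lambda handler: read the object named in an S3 event and count its lines.
import boto3

s3 = boto3.client("s3")  # created once per container, reused across invocations


def handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    obj = s3.get_object(Bucket=bucket, Key=key)
    line_count = sum(1 for _ in obj["Body"].iter_lines())

    print(f"s3://{bucket}/{key} has {line_count} lines")
    return {"bucket": bucket, "key": key, "lines": line_count}
```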

Job posted by Sneha Pandey

Data Scientist
Founded 1997
Location: Pune
Experience: 4 - 8 years
Salary: ₹1,00,000 - ₹16,00,000

Description:
- Must have direct, hands-on experience (4+ years) building complex data science solutions
- Must have fundamental knowledge of inferential statistics
- Should have worked on predictive modelling using Python or R (a short supervised-learning sketch in Python follows this listing)
- Experience should include file I/O, data harmonization, and data exploration
- Machine learning techniques (supervised and unsupervised)
- Multi-dimensional array processing
- Deep learning
- NLP, image processing
- Prior experience in the healthcare domain is a plus
- Experience using big data is a plus
- Should have excellent analytical and problem-solving ability, and be able to grasp new concepts quickly
- Should be well familiar with Agile project management methodology
- Should have excellent written and verbal communication skills
- Should be a team player with an open mind
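As a short illustration of supervised predictive modelling in Python, here is a scikit-learn sketch on a bundled dataset; the dataset and model choice are placeholders and are not part of the role's actual work.

```python
# Train a classifier and report held-out accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold out 25% of the data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```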

Job posted by Sandeep Chaudhary

Bigdata Lead
Founded 1997
Location: Pune
Experience: 2 - 5 years
Salary: ₹1,00,000 - ₹18,00,000

Description:
- Deep experience with and understanding of Apache Hadoop and surrounding technologies; experience with Spark, Impala, Hive, Flume, Parquet, and MapReduce
- Strong understanding of development languages including Java, Python, Scala, and shell scripting
- Expertise in Apache Spark 2.x framework principles and usage
- Proficient in developing Spark batch and streaming jobs in Python, Scala, or Java
- Proven experience in performance tuning of Spark applications, from both the application-code and configuration perspectives
- Proficient in Kafka and its integration with Spark (a minimal Structured Streaming sketch follows this listing)
- Proficient in Spark SQL and data warehousing techniques using Hive
- Very proficient in Unix shell scripting and in operating on Linux
- Knowledge of cloud-based infrastructure
- Good experience in tuning Spark applications and driving performance improvements
- Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities
- Experience with software development best practices: version control systems, automated builds, etc.
- Experienced in, and able to lead, all phases of the Software Development Life Cycle on any project (feasibility, planning, analysis, development, integration, test, and implementation)
- Capable of working within a team or as an individual
- Experience creating technical documentation
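A minimal sketch of Spark-Kafka integration with Structured Streaming in Python (PySpark): read a topic and persist it as Parquet. The broker, topic, and paths are placeholders, and the job assumes Spark is launched with the spark-sql-kafka connector package.

```python
# Stream a Kafka topic into partitioned Parquet files with exactly-once checkpointing.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-parquet").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "events")                       # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
    # Kafka keys/values arrive as bytes; cast to string for downstream parsing.
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/events")                # output location
    .option("checkpointLocation", "hdfs:///chk/events")   # required for recovery
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()
```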

Job posted by Sandeep Chaudhary