
Hadoop Jobs in Pune

Explore top Hadoop job opportunities in Pune at top companies and startups. All jobs are added by verified employees who can be contacted directly below.

Technical Delivery Managers - Big Data, Cloud, Data Warehousing, BI

Founded 2013
Location: Pune
Experience: 8 - 15 years
Salary: ₹25L - ₹35L

Job Title/Designation: Technical Manager - Big Data, Data Warehousing, BI

Job Description:
Location - Pune
Experience - 8+ years

Responsibilities:
- Work closely with customers to understand requirements and to discuss and define the various use cases
- Liaise with key stakeholders to define the big data solutions roadmap and prioritize the deliverables
- Own end-to-end delivery of Big Data solutions, covering project estimation, planning, resourcing and monitoring
- Drive and participate in requirements-gathering workshops, estimation discussions, design meetings and status review meetings
- Participate and contribute in solution design and solution architecture for implementing Big Data projects
- Monitor and review project status and ensure deliverables stay on track with respect to scope, budget and time
- Communicate project status transparently to all stakeholders on a regular basis
- Identify and manage risks and issues related to deliverables and arrive at mitigation plans to resolve them
- Continuously seek proactive feedback to identify areas of improvement
- Ensure the team creates and maintains knowledge artifacts for the project deliverables

Mandatory Skills:
- Hands-on experience in designing, developing and managing big data technologies
- Experience managing projects in Big Data, Data Warehousing and Business Intelligence using open source or top-of-the-line tools and technologies
- Good knowledge of dimensional modeling
- Experience managing medium to large projects
- Proven experience in project planning, estimation, execution and implementation of medium to large projects
- Proficiency with development methodologies such as waterfall, agile/scrum and iterative
- Good interpersonal skills and excellent communication skills
- Advanced-level Microsoft Project, PowerPoint, Visio, Excel and Word

Other Skills:
- Knowledge of the Big Data ecosystem
- Business domain knowledge
- Project management training/certification such as PMI's Project Management Professional (PMP) or Certified Scrum Master

Job posted by Nikita Aher

Big Data Engineer

Founded 2013
Location: Pune
Experience: 2.5 - 6 years
Salary: ₹1L - ₹8L

Job Title/Designation: Big Data Engineers - Hadoop, Pig, Hive, Spark
Employment Type: Full Time, Permanent

Job Description:
Work Location - Pune
Work Experience - 2.5 to 6 years
Note - Candidates with short notice periods will be given preference.

Mandatory Skills:
- Working knowledge and hands-on experience with Big Data / Hadoop tools and technologies
- Experience working with Pig, Hive, Flume, Sqoop, Kafka, etc.
- Database development experience with a solid understanding of core database concepts, relational database design, ODS and DWH
- Expert-level knowledge of SQL and scripting, preferably UNIX shell scripting and Perl scripting
- Working knowledge of data integration solutions and familiarity with an ETL tool (Informatica, DataStage, Ab Initio, Pentaho, etc.)
- Strong problem-solving and logical reasoning ability
- Excellent understanding of all aspects of the software development lifecycle
- Excellent written and verbal communication skills
- Experience in Java is an added advantage
- Knowledge of object-oriented programming concepts
- Exposure to ISMS policies and procedures

Job posted by Nikita Aher

Data Scientist

Founded 2004
Location: Pune, Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: ₹10L - ₹18L

Who are we?
Searce is a niche cloud consulting business with a futuristic tech DNA. We use new-age tech to realize the "Next" in the "Now" for our clients. We specialize in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infrastructure technologies such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.

What do we believe?
- Best practices are overrated: implementing best practices can only make one "average".
- Honesty and transparency: we believe in the naked truth. We do what we tell and tell what we do.
- Client partnership: a client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.

How do we work?
It's all about being happier first, and the rest follows. Searce work culture is defined by HAPPIER:
- Humble: Happy people don't carry ego around. We listen to understand, not to respond.
- Adaptable: We are comfortable with uncertainty, and we accept changes well, as that's what life is about.
- Positive: We are super positive about work and life in general. We love to forget and forgive. We don't hold grudges; we don't have the time or space for them.
- Passionate: We are as passionate about the great street-food vendor across the street as about Tesla's new model. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: Innovate or die. We love to challenge the status quo.
- Experimental: We encourage curiosity and making mistakes.
- Responsible: Driven, self-motivated, self-governing teams. We own it.
We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities and small team sizes, where flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.

Introduction:
As a Senior Data Scientist, you will help develop and enhance the algorithms and technology that power our unique system. The role covers a wide range of challenges, from developing new models using pre-existing components to making current systems more intelligent. You should be able to train models on existing data and use them creatively to deliver the smartest experience to customers. You will develop sophisticated enterprise/cloud technology applications that push the threshold of intelligence in machines. Working on multiple projects at a time, you maintain a consistently high level of attention to detail while finding creative ways to provide analytical insights. You thrive in a fast, high-energy environment and are able to balance multiple projects in real time. The thrill of the next big challenge drives you, and when faced with an obstacle, you find clever solutions. You must have the ability and interest to work on a range of different types of projects and business processes, and a background that demonstrates this ability.
Your bucket of undertaking:
● Collaborate with team members to develop new models for classification problems
● Handle software profiling, performance tuning and analysis, and other general software engineering tasks
● Use independent judgment to take existing data and build new models from it
● Collaborate and provide technical guidance
● Come up with new ideas, rapid prototyping, and converting prototypes into scalable products
● Conduct experiments to assess the accuracy and recall of language processing modules and to study the effect of such experiments
● Improve existing models and create self-learning systems
● Stakeholder management and leadership
● Decision making and problem solving

Fit Assessment:
A Searce team member is a highly motivated individual with a phenomenal amount of passion and energy for whatever he/she engages in; who respects honesty, integrity, initiative, and a creative approach to problem solving; an inspiration to colleagues, he/she is a tenacious, highly driven professional with a proven record of success and strong empathy for people - clients, partners, colleagues or vendors.

Accomplishment Set:
● Extensive experience with Hadoop and machine learning algorithms
● Exposure to Deep Learning, Neural Networks, or related fields and a strong interest and desire to pursue them
● Experience in Natural Language Processing, Computer Vision, Machine Learning or Machine Intelligence (Artificial Intelligence)
● Passion for solving NLP problems
● Experience with specialized tools and projects for Natural Language Processing
● Programming experience in Python
● Knowledge of machine learning frameworks such as TensorFlow and PyTorch
● Experience with software version control systems such as GitHub
● Fast learner, able to work independently as well as in a team environment, with good written and verbal communication skills

Education and Experience:
● B.E. / B.Tech / Masters in Computer Science
● Strong academics and good aptitude
● Excellent communication skills with a flair for learning quickly
● 4-8 years of relevant experience
● Research and implement novel machine learning and statistical approaches
● Prior exposure to product development is an added advantage

Job posted by Adarsh Charles

Big Data Developer

Founded 2000
Location: Bengaluru (Bangalore), Chennai, Pune
Experience: 4 - 10 years
Salary: ₹8L - ₹15L

Role Summary/Purpose:
We are looking for Developers/Senior Developers to be part of building an advanced analytics platform leveraging Big Data technologies and transforming legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.

Requirements:
- The candidate must be a self-starter who can work under general guidelines in a fast-paced environment
- Overall minimum of 4 to 8 years of software development experience and 2 years of Data Warehousing domain knowledge
- Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc.
- Excellent knowledge of SQL and Linux shell scripting
- Bachelor's/Master's/Engineering degree from a well-reputed university
- Strong communication, interpersonal, learning and organizing skills, matched with the ability to manage stress, time and people effectively
- Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
- Ability to manage a diverse and challenging stakeholder community
- Diverse knowledge and experience of working on Agile deliveries and Scrum teams

Responsibilities:
- Work as a senior developer/individual contributor depending on the situation
- Take part in Scrum discussions and gather requirements
- Adhere to the Scrum timeline and deliver accordingly
- Participate in a team environment for design, development and implementation
- Take on L3 activities on a need basis
- Prepare Unit/SIT/UAT test cases and log the results
- Coordinate SIT and UAT testing; take feedback and provide necessary remediation/recommendations in time
- Treat quality delivery and automation as top priorities
- Coordinate changes and deployments in time
- Create healthy harmony within the team
- Own interaction points with members of the core team (e.g. BA team, testing and business teams) and any other relevant stakeholders

Job posted by Rashmi Poovaiah

Big Data Engineer

Founded 2012
Location: Remote, Pune
Experience: 3 - 8 years
Salary: ₹4L - ₹15L

Job Title/Designation: Mid / Senior Big Data Engineer

Job Description:
Role: Big Data Engineer
Number of open positions: 5
Location: Pune

At Clairvoyant, we're building a thriving big data practice to help enterprises enable and accelerate the adoption of big data and cloud services. In the big data space, we lead and serve as innovators, troubleshooters, and enablers. The big data practice at Clairvoyant focuses on solving our customers' business problems by delivering products designed with best-in-class engineering practices and a commitment to keeping the total cost of ownership to a minimum.

Must Have:
- 4-10 years of experience in software development
- At least 2 years of relevant work experience on large-scale data applications
- Strong coding experience in Java is mandatory
- Good aptitude, strong problem-solving abilities and analytical skills, and the ability to take ownership as appropriate
- Should be able to do coding, debugging, performance tuning and deploying apps to production
- Should have good working experience with the Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet), Kafka, J2EE frameworks (Spring/Hibernate/REST), and Spark Streaming or any other streaming technology
- Ability to work on sprint stories to completion along with unit test case coverage
- Experience working in Agile methodology
- Excellent communication and coordination skills
- Knowledgeable (and preferably hands-on) with UNIX environments and different continuous integration tools
- Must be able to integrate quickly into the team and work independently towards team goals

Role & Responsibilities:
- Take complete responsibility for the execution of sprint stories
- Be accountable for delivering tasks in the defined timelines with good quality
- Follow the processes for project execution and delivery
- Follow Agile methodology
- Work closely with the team lead and contribute to the smooth delivery of the project
- Understand/define the architecture and discuss its pros and cons with the team
- Take part in brainstorming sessions and suggest improvements in the architecture/design
- Work with other team leads to get the architecture/design reviewed
- Work with the clients and counterparts (in the US) on the project
- Keep all stakeholders updated about project/task status, risks and issues, if any

Education: BE/B.Tech from a reputed institute
Experience: 4 to 9 years
Keywords: java, scala, spark, software development, hadoop, hive
Location: Pune

Job posted by Taruna Roy

Software Developer (ML and AI)

Founded 2019
Location: Pune
Experience: 1 - 5 years
Salary: ₹3L - ₹6L

SD (ML and AI) job description:
- Advanced degree in computer science, math, statistics or a related discipline (a master's degree is a must)
- Extensive data modeling and data architecture skills
- Programming experience in Python and R
- Background in machine learning frameworks such as TensorFlow or Keras
- Knowledge of Hadoop or another distributed computing system
- Experience working in an Agile environment
- Advanced math skills (important): linear algebra, discrete math, differential equations (ODEs and numerical), theory of statistics, numerical analysis (numerical linear algebra, quadrature), abstract algebra, number theory, real analysis, complex analysis, intermediate analysis (point-set topology)
- Strong written and verbal communication
- Hands-on experience with NLP and NLG
- Experience with advanced statistical techniques and concepts (GLM/regression, random forests, boosting, trees, text mining) and experience applying them

Job posted by Dipendra Singh

Java Developer

Founded 2016
Location: Pune
Experience: 1 - 4 years
Salary: ₹4L - ₹7L

You will be responsible for the design, development and testing of products, contributing to all phases of the development lifecycle:
- Writing well-designed, testable, efficient code
- Ensuring designs are in compliance with specifications
- Preparing and producing releases of software components
- Supporting continuous improvement by investigating alternatives and technologies and presenting these for architectural review
Some of the technologies you will be working on: Core Java, Solr, Hadoop, Spark, Elasticsearch, Clustering, Text Mining, NLP, Mahout, Lucene, etc.

Job posted by Saujanya Sathe

Data Scientist

Founded 2019
Location: Pune
Experience: 3 - 7 years
Salary: ₹12L - ₹18L

The selected candidate will be part of the in-house Data Labs team and will be responsible for creating an insights-driven decision structure. This will include:
- Scorecards
- Strategies
- MIS
The verticals covered are:
- Risk
- Marketing
- Product

Job posted by Ankit Goenka

Technology Lead - Java/AWS

Founded 1997
Location: Pune
Experience: 7 - 14 years
Salary: ₹10L - ₹20L

Looking for a Java Tech Lead experienced in AWS/Hadoop. Candidates from product-based firms preferred. Must have handled a team size of 10+ people.

Job posted by Vartica Lal

Big Data Lead

Founded 1997
Location: Pune
Experience: 2 - 5 years
Salary: ₹1L - ₹18L

Description:
- Deep experience with and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet and MapReduce
- Strong understanding of development languages including Java, Python, Scala and shell scripting
- Expertise in Apache Spark 2.x framework principles and usage
- Proficient in developing Spark batch and streaming jobs in Python, Scala or Java
- Proven experience in performance tuning of Spark applications, both from the application-code and the configuration perspective
- Proficient in Kafka and its integration with Spark
- Proficient in Spark SQL and data warehousing techniques using Hive
- Very proficient in Unix shell scripting and in operating on Linux
- Knowledge of cloud-based infrastructure
- Good experience in tuning Spark applications and performance improvements
- Strong understanding of data profiling concepts and ability to operationalize analyses into design and development activities
- Experience with software development best practices: version control systems, automated builds, etc.
- Experienced in, and able to lead, the phases of the Software Development Life Cycle on any project (feasibility planning, analysis, development, integration, test and implementation)
- Capable of working within a team or as an individual
- Experience creating technical documentation

Job posted by Sandeep Chaudhary

Java Application Developer (4+ Yrs of Workex), Graph Based Product Dev

Founded 2014
Location: Pune
Experience: 4 - 9 years
Salary: ₹4L - ₹12L

We are looking to hire passionate Java techies who will be comfortable learning and working on Java and any open source frameworks and technologies. She/he should be 100% hands-on with technology skills and interested in solving complex analytics use cases. We are working on a complete-stack platform which has already been adopted by some very large enterprises across the world. Candidates with prior experience of working in a typical R&D environment and/or in product-based companies with a dynamic work environment will have an additional edge. We currently work on some of the latest technologies like Cassandra, Hadoop, Apache Solr, Spark and Lucene, and some core Machine Learning and AI technologies. Even though prior knowledge of these skills is not mandatory for selection, you would be expected to learn new skills on the job.

Job posted by Neha Ambastha

Director of Engineering

Founded 2015
Location: Pune
Experience: 8 - 10 years
Salary: ₹25L

Job Summary:
In this position, you will manage and provide technical leadership for the product team. Leadership, communication, prioritization and a focus on excellence are essential characteristics for this role.

Responsibilities and Duties:
- Manage (recruit, motivate, develop, strengthen) the product engineering team
- Mentor and lead the engineering team as a subject matter expert for all technology and architecture related issues
- Architect, design, develop and implement frameworks and application software components using cloud and enterprise/open source technologies
- Be accountable for the overall technical excellence and quality of the platforms
- Be proactive and enhance the existing software architecture by analyzing and identifying areas for improvement
- Create and manage a technology strategy that serves the business strategy
- Low-burn, highly iterative new product development and testing
- 80/20 rule: effectively create low-resource, high-impact technology solutions
- Future ready: always look to disrupt and challenge the status quo

Required Experience, Skills and Qualifications:
- B.E. in Computer Science or equivalent, with demonstrated problem-solving and leadership skills to pursue the correct engineering process in adverse conditions, and the ability to embrace and demonstrate leadership beyond ownership
- Articulates a clear technology vision that is inspiring and aligned with business needs; experienced in articulating business goals and cascading them down the organization so that every person is appropriately stretched to achieve outcomes
- Minimum of 8 - 10 years of progressive experience in software engineering, of which 70% should be in startup companies/environments, in a leadership capacity and across a variety of technology stacks (from conception to go-live); ability to work efficiently in an entrepreneurial, startup environment
- Strong experience in building and deploying enterprise-grade products
- Excellent and robust understanding of scalable product system architecture(s), platforms and core technologies
- Strong experience in RDBMS, preferably with NoSQL and Hadoop as well, and experience across multiple platforms (front-end, middleware)
- Well versed in technologies like Angular, Core JS, Java, Python, etc.

Job posted by Aditya Bhelande

Hadoop Developer

Founded 2008
Location: Pune
Experience: 3 - 7 years
Salary: ₹10L - ₹15L

Securonix is a Big Data security analytics product company, and the only product that delivers real-time behavior analytics (UEBA) on Big Data.

Job posted by Ramakrishna Murthy

Big Data

Founded 2014
Location: Pune
Experience: 5 - 10 years
Salary: ₹5L

We at InfoVision Labs are passionate about technology and about what our clients would like to accomplish. We continuously strive to understand business challenges, the changing competitive landscape, and how cutting-edge technology can help position our clients at the forefront of the competition. We are a fun-loving team of usability experts and software engineers, focused on mobile technology, responsive web solutions and cloud-based solutions.

Job Responsibilities:
◾ Minimum 3 years of experience with Big Data required
◾ Complete lifecycle experience with Big Data is highly preferred
◾ Skills: Hadoop, Spark, R, Hive, Pig, HBase and Scala
◾ Excellent communication skills
◾ Ability to work independently with no supervision

Job posted by Shekhar Singh Kshatri

Sr. Tech Lead

Founded 2015
Location: Pune
Experience: 8 - 10 years
Salary: ₹17L - ₹20L

Responsibilities:
Responsible for all aspects of development and support for internally created or supported application software, including the development methodologies, technologies (languages, databases, support tools), development and testing hardware/software environments, and management of the application development staff and project workload for the agency. Your job is to manage a project and a set of engineers. You are responsible for keeping your team happy and productive and helping them manage their careers. You are responsible for delivering great product on time and with quality.

Essential Duties and Responsibilities:
• Supervise the projects and responsibilities of the web and software developers
• Prioritize the projects assigned to the application development team
• Own the complete development lifecycle of the agency software systems, including gathering requirements, database management, software development, testing, implementation, user follow-up, support and project management
• Responsible for the integrity of, maintenance of, and changes to the application development servers and databases (DBA)
• Develop and implement change control processes for the development team to follow
• Provide ad hoc reporting and decision support required for management decision processes
• Make technology decisions that affect software development
• Work on special IT projects as needed

Familiarity with Technologies:
• Java, Spring, Hibernate, Laravel
• MySQL, MongoDB, Amazon Redshift, Hadoop
• Angular.js, Bootstrap
• AWS cloud infrastructure

Qualifications:
• Bachelor's degree in Information Science or Computer Science required
• 8-10 years of application development experience required
• Five-plus years of database design and analysis required
• Strong verbal communication skills required

Job posted by Aditya Bhelande

Ab Initio, Big Data, Informatica, Tableau, Data Architect, Cognos, MicroStrategy, Healthcare Business Analysts, Cloud, etc.

Founded 2012
via Exusia
Location: Pune, Chicago, Hyderabad, New York
Experience: 1 - 15 years
Salary: ₹5L - ₹10L

Exusia, Inc. (ex-OO-see-ah: translated from Greek to mean "Immensely Powerful and Agile") was founded with the objective of addressing a growing gap in the data innovation and engineering space as the next global leader in big data, analytics, data integration and cloud computing solutions. Exusia is a multinational, delivery-centric firm that provides consulting and software as a service (SaaS) solutions to leading financial, government, healthcare, telecommunications and high technology organizations facing the largest data volumes and the most complex information management requirements.

Exusia was founded in the United States in 2012 with headquarters in New York City and regional US offices in Chicago, Atlanta and Los Angeles. Exusia's international presence continues to expand and is driven from Toronto (Canada), Sao Paulo (Brazil), Johannesburg (South Africa) and Pune (India).

Our mission is to empower clients to grow revenue, optimize costs and satisfy regulatory requirements through the innovative use of information and analytics. We leverage a unique blend of strategy, intellectual property, technical execution and outsourcing to enable our clients to achieve significant returns on investment for their business, data and technology initiatives. At the core of our philosophy is a quality-first, trust-building, delivery-focused client relationship. The foundation of this relationship is the talent of our team. By recruiting and retaining the best talent in the industry, we are able to deliver to clients, whose data volumes and requirements number among the largest in the world, a broad range of customized, cutting-edge solutions.

Job posted by Dhaval Upadhyay
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No spam.