
RELEASE MANAGER
Bangalore/Pune
Full Time
We are seeking a collaborative, organized, process- and detail-oriented Release Manager to drive high-quality, on-time deployment of quarterly product releases by designing, implementing, and executing effective release management processes in collaboration with global Engineering, Operations, Product Management, Services, and Technical Support teams.
Responsibilities
- Managing risks and resolving challenges that impact release scope, quality, and schedules
- Planning release windows and cycles across portfolios, components
- Managing relationships working on interrelated processes
- Communicating crucial release plans and changes
- Measuring and monitoring progress to achieve a timely software release within defined budgetary limits and defined quality standards
- Producing deployment, implementation, and run book plans
- Conducting release readiness and milestone reviews
- Maintaining release schedules for every core service and ensuring they align with major vendors and other stakeholders
Qualifications
- Mid-Senior: 7-12+ years of experience
- Experience as a release manager leading a substantial feature/product in the enterprise space
- Curiosity to learn + Extreme ownership + EQ/interpersonal skills
- Enjoys streamlining complex engineering processes and wants to gain mastery
- Excellent written and verbal communication skills
- Excellent analytical skills
- Structured thinking
- Experience in coordinating cross-functional work teams through to task completion
We have a passion for excellence. We love what we’re doing and have fun doing it. We value great minds, but we’re also humble enough to rally around the best solution, regardless of whose idea it was. We appreciate talent, and we are always on the lookout for the right people to join us. Dial for info: 9101433812.

Similar jobs
We are looking to expand our existing Python team across our offices in Surat. This position is for SDE-1 - Junior Software Engineer.
The requirements are as follows:
1) Familiarity with the Django REST Framework (a minimal example appears after this list).
2) Experience with the FastAPI framework will be a plus.
3) Strong grasp of basic Python programming concepts (we do ask a lot of questions on this in our interviews :)).
4) Experience with databases like MongoDB, Postgres, Elasticsearch, and Redis will be a plus.
5) Experience with any ML library will be a plus.
6) Familiarity with using Git, writing unit tests for all code written, and CI/CD concepts will be a plus as well.
7) Familiarity with basic design patterns like MVC.
8) Grasp of basic data structures.
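For reference, a minimal Django REST Framework endpoint of the kind point 1 refers to might look like the sketch below; the `Task` model, its fields, and the app name are hypothetical illustrations, not part of this posting.

```python
# Minimal Django REST Framework sketch: a hypothetical Task model
# exposed as a read/write JSON API. Assumes Django and
# djangorestframework are installed and a project is configured.
from django.db import models
from rest_framework import routers, serializers, viewsets


class Task(models.Model):
    title = models.CharField(max_length=200)
    done = models.BooleanField(default=False)

    class Meta:
        app_label = "tasks"  # hypothetical app name


class TaskSerializer(serializers.ModelSerializer):
    class Meta:
        model = Task
        fields = ["id", "title", "done"]


class TaskViewSet(viewsets.ModelViewSet):
    queryset = Task.objects.all()
    serializer_class = TaskSerializer


# The router generates list/detail/create/update/delete routes under /tasks/.
router = routers.DefaultRouter()
router.register(r"tasks", TaskViewSet)
urlpatterns = router.urls
```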
- Big Data developer with 8+ years of professional IT experience, with expertise in Hadoop ecosystem components across ingestion, data modeling, querying, processing, storage, analysis, data integration, and implementing enterprise-level Big Data systems.
- A skilled developer with strong problem-solving, debugging, and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala, and Oozie.
- Hands-on experience in creating real-time data streaming solutions using Apache Spark Core, Spark SQL & DataFrames, Kafka, Spark Streaming, and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include the NameNode, DataNode, ResourceManager, NodeManager, and JobHistoryServer.
- Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in installation, configuration, and management of Big Data and the underlying infrastructure of a Hadoop cluster.
- Hands-on experience in coding MapReduce/YARN programs using Java, Scala, and Python for analyzing Big Data.
- Exposure to the Cloudera development environment and management using Cloudera Manager.
- Extensively worked on Spark using Scala on a cluster for computational analytics; installed it on top of Hadoop and performed advanced analytical applications by making use of Spark with Hive and SQL/Oracle.
- Implemented Spark using Python, utilizing DataFrames and the Spark SQL API for faster processing of data; handled importing data from different data sources into HDFS using Sqoop, performing transformations using Hive and MapReduce, and then loading the data into HDFS.
- Used the Spark DataFrames API on the Cloudera platform to perform analytics on Hive data.
- Hands-on experience with Spark MLlib, used for predictive intelligence and customer segmentation, and for smooth maintenance in Spark Streaming.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Worked on creating data pipelines for ingestion and aggregation of different events, loading consumer response data into Hive external tables in an HDFS location to serve as a feed for Tableau dashboards.
- Hands-on experience in using Sqoop to import data into HDFS from RDBMS and vice versa.
- In-depth understanding of Oozie for scheduling all Hive/Sqoop/HBase jobs.
- Hands-on expertise in real-time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience in the Microsoft cloud and in setting up clusters on Amazon EC2 & S3, including automation of setting up and extending the clusters in the AWS cloud.
- Extensively worked on Spark using Python on a cluster for computational analytics; installed it on top of Hadoop and performed advanced analytical applications by making use of Spark with Hive and SQL.
- Strong experience and knowledge of real-time data analytics using Spark Streaming, Kafka, and Flume (a streaming sketch follows this list).
- Knowledge of installation, configuration, support, and management of Hadoop clusters using the Apache and Cloudera (CDH3, CDH4) distributions and on Amazon Web Services (AWS).
- Experienced in writing ad hoc queries using Cloudera Impala, including Impala analytical functions.
- Experience in creating DataFrames using PySpark and performing operations on them in Python (see the DataFrame sketch after this list).
- In-depth understanding of Hadoop architecture and various components such as HDFS, the MapReduce programming paradigm, High Availability, and the YARN architecture.
- Established multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
- Generated various kinds of knowledge reports using Power BI based on business specifications.
- Developed interactive Tableau dashboards to provide a clear understanding of industry-specific KPIs, using quick filters and parameters to handle them more efficiently.
- Well experienced in projects using JIRA, testing, and the Maven and Jenkins build tools.
- Experienced in designing, building, deploying, and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance, and auto-scaling.
- Good experience with use-case development and software methodologies like Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and the Simple Storage Service (S3) as a storage mechanism.
- Good working experience in importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, Mainframes, Oracle, and Netezza to HDFS, and performing transformations on it using Hive, Pig, and Spark.
- Extensive experience in text analytics, developing different statistical machine learning solutions to various business problems, and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra, and MongoDB, and their integration with a Hadoop cluster.
- Hands-on experience in Hadoop Big Data technology, working with MapReduce, Pig, and Hive as analysis tools, and Sqoop and Flume as data import/export tools.
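As a concrete point of reference for the PySpark bullets above, a minimal sketch of creating a DataFrame and querying it through Spark SQL might look like the following; the column names and the `sales` view are hypothetical illustrations, not taken from any project described here.

```python
# Minimal PySpark sketch: build a DataFrame, register it as a
# temporary view, and run the same aggregation via the DataFrame
# API and via Spark SQL. All names and data are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dataframe-sketch").getOrCreate()

rows = [("north", 120.0), ("south", 80.5), ("north", 42.0)]
df = spark.createDataFrame(rows, ["region", "amount"])

# DataFrame API: total amount per region.
df.groupBy("region").sum("amount").show()

# Equivalent Spark SQL over a temporary view.
df.createOrReplaceTempView("sales")
spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region").show()

spark.stop()
```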
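For the Spark Streaming/Kafka bullet, a sketch using Spark Structured Streaming (the DataFrame-based successor to DStream-style Spark Streaming) to consume a Kafka topic might look like this; the broker address and topic name are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Minimal Structured Streaming sketch: read a Kafka topic and echo
# each micro-batch to the console. Broker and topic are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                        # placeholder topic
    .load()
)

# Kafka delivers keys/values as bytes; cast the value to a string.
query = (
    events.select(col("value").cast("string").alias("payload"))
    .writeStream.format("console")
    .start()
)
query.awaitTermination()
```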
Role Objectives
- Participate in complex design and software development tasks within appropriate schedule, task, and quality guidelines set in conjunction with the Senior Platform Specialists and Platform Architect
- Hands-on development and implementation of solutions, including the configuration of e.IQ platform components and custom development
- Tasks associated with testing and documentation of the solution implemented
Academic Qualification
B.E./B.Tech/MCA in Computer Science Engineering or a related field.
Experience Profile
4-6 years of experience in development or platform implementations.
Required Technical Skillset for Java Development
- Core JavaScript, CSS, HTML5
- At least one of these frameworks (AngularJS / React / Vue / jQuery / Bootstrap)
- Experience in developing front-end web applications for various devices (responsive)
Good to have:
- Basic Java knowledge/programming, RDBMS (preferably MySQL or MSSQL), exposure to Liferay
- Must be capable of coding and unit testing their own code
- Team player with good interpersonal skills
- Should have strong analytical skills
- Good communication skills
- Capable of juggling several priorities and delivering results on time in a high-pressure, dynamic environment
- Conducting a demo session with the parents and kids to explain the programs.
- Interacting with different departments to close the sales.
- Guiding the students to a brighter career path (over a video call)
- Build a rapport with the customer.
Responsibilities and Duties:
1. Meet leads for assessment and solution offering (leads and appointments will be provided by the company)
2. Prepare quotations as per client requirements
3. Handle and resolve feedback, complaints, or conflicts
4. Team handling; highly self-motivated
5. Possess a strong work ethic and keep information confidential
6. Two-wheeler compulsory
7. Driving licence (DL) compulsory
8. A background in the paint industry will be an added bonus
Qualifications and Skills
Any graduate, having completed a B.E. degree in Civil.
Ready to travel
Benefits
1. High growth environment
2. Travelling Allowance
3. Provident Fund
4. Health Insurance
Job Types: Full-time, Regular / Permanent
Salary: ₹15,000.00 - ₹30,000.00 per month
Benefits:
- Cell phone reimbursement
- Health insurance
- Life insurance
- Paid sick time
- Provident Fund
QA AUTOMATION TESTER:
- Willing to contribute to small or large project teams
- Must be willing to start immediately; permanent work from home
- Good communication skills (English)
- Can allocate working hours flexibly as per project requirements
- Must have a good internet connection and a good laptop/system
2. Knowledge of JSON, XML, REST, SOAP, Git/SVN, and the MVVM architecture
3. The ability to manage scripts and cron jobs
4. Familiarity with continuous integration/build tools such as Jenkins and GitLab, and basic CI/CD
5. Object-oriented programming using Node.js (preferred)
6. Experience with frameworks such as Express.js or Laravel is an added advantage
7. The candidate will also be responsible for developing web services, dashboards, etc. for the Android app/web portal
8. Good exposure to OOP and core Node.js
9. Knowledge of e-commerce will be preferred
10. Knowledge of Joomla, Drupal, Magento, and other CMSs will be an advantage
11. A hacker mindset
12. Familiarity with PHP, Python, and server-side DevOps
13. Should have knowledge of Android SDK, different versions of Android, and how to deal with different screen sizes
14. Should have knowledge of Android UI design principles, patterns, and best practices
15. Should have the ability to design applications around natural user interfaces
16. Should have an understanding of Google's Android design principles (material design) and interface guidelines
17. Should have a good understanding of the Activity and Fragment lifecycles
18. Should have debugging and problem-solving skills
19. Familiar with database design principles
