
Innovate and develop interactive ways to enhance English learning through real-world activities. This role is ideal for those passionate about language education and creative teaching methods.
Benefits:
● Experience in language learning methodologies
● Opportunity to develop experiential learning programs
● Hands-on teaching experience
Key Responsibilities:
✔ Design real-world activities to enhance English learning
✔ Conduct interactive and engaging sessions
✔ Work with students to improve communication skills
Requirements:
● Strong command of English language and communication skills
● Creativity in designing experiential learning methods

About Innovators International School
An L2 Technical Support Engineer with Python knowledge is responsible for handling escalated, more complex technical issues that the Level 1 (L1) support team cannot resolve. Your primary goal is to perform deep-dive analysis, troubleshooting, and problem resolution to minimize customer downtime and ensure system stability.
Python is a key skill, used for scripting, automation, debugging, and data analysis in this role.
Key Responsibilities
- Advanced Troubleshooting & Incident Management:
- Serve as the escalation point for complex technical issues (often involving software bugs, system integrations, backend services, and APIs) that L1 support cannot resolve.
- Diagnose, analyze, and resolve problems, often requiring in-depth log analysis, code review, and database querying.
- Own the technical resolution of incidents end-to-end, adhering strictly to established Service Level Agreements (SLAs).
- Participate in on-call rotation for critical (P1) incident support outside of regular business hours.
- Python-Specific Tasks:
- Develop and maintain Python scripts for automation of repetitive support tasks, system health checks, and data manipulation (a minimal sketch follows this list).
- Use Python for debugging and troubleshooting by analyzing application code, API responses, or data pipeline issues.
- Write ad-hoc scripts to extract, analyze, or modify data in databases for diagnostic or resolution purposes.
- Potentially apply basic-to-intermediate code fixes in Python applications in collaboration with development teams.
- Collaboration and Escalation:
- Collaborate closely with L3 Support, Software Engineers, DevOps, and Product Teams to report bugs, propose permanent fixes, and provide comprehensive investigation details.
- Escalate issues that require significant product changes or deeper engineering expertise to the L3 team, providing clear, detailed documentation of all steps taken.
- Documentation and Process Improvement:
- Conduct Root Cause Analysis (RCA) for major incidents, documenting the cause, resolution, and preventative actions.
- Create and maintain a Knowledge Base (KB), runbooks, and Standard Operating Procedures (SOPs) for recurring issues to empower L1 and enable customer self-service.
- Proactively identify technical deficiencies in processes and systems and recommend improvements to enhance service quality.
- Customer Communication:
- Maintain professional, clear, and timely communication with customers, explaining complex technical issues and resolutions in an understandable manner.
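A minimal sketch of the kind of health-check automation described in the Python-Specific Tasks above. The endpoint URL, log path, and error threshold are illustrative assumptions, not details of any actual system.

# Minimal L2 support health-check sketch; URL, log path and threshold are hypothetical.
import re
import sys
from urllib.request import urlopen

HEALTH_URL = "http://localhost:8080/health"    # hypothetical health endpoint
LOG_PATH = "/var/log/app/application.log"      # hypothetical application log
ERROR_THRESHOLD = 10                           # illustrative alert limit

def endpoint_ok(url: str) -> bool:
    """Return True if the service answers with HTTP 200."""
    try:
        with urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def count_error_lines(path: str) -> int:
    """Count ERROR lines in the log as a rough health signal."""
    try:
        with open(path, encoding="utf-8", errors="replace") as fh:
            return sum(1 for line in fh if re.search(r"\bERROR\b", line))
    except FileNotFoundError:
        return 0

if __name__ == "__main__":
    ok = endpoint_ok(HEALTH_URL)
    errors = count_error_lines(LOG_PATH)
    print(f"endpoint_ok={ok} error_lines={errors}")
    # A non-zero exit code lets cron or a monitoring agent raise an alert.
    sys.exit(0 if ok and errors < ERROR_THRESHOLD else 1)

A script like this would typically be wired into cron or the team's monitoring stack so recurring checks no longer need manual effort.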
Required Technical Skills
- Programming/Scripting:
- Strong proficiency in Python (for scripting, automation, debugging, and data manipulation).
- Experience with other scripting languages such as Bash or shell scripting.
- Databases:
- Proficiency in SQL for complex querying, debugging data flow issues, and data extraction.
- Application/Web Technologies:
- Understanding of API concepts (RESTful/SOAP) and experience troubleshooting them using tools like Postman or curl (a scripted example follows this list).
- Knowledge of application architectures (e.g., microservices, SOA) is a plus.
- Monitoring & Tools:
- Experience with support ticketing systems (e.g., JIRA, ServiceNow).
- Familiarity with log aggregation and monitoring tools (e.g., Kibana, Splunk, the ELK Stack, Grafana).
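The API troubleshooting mentioned above is usually done with curl or Postman; the same check can be scripted when it needs to be repeated. Below is a minimal, hypothetical Python sketch using the third-party requests library; the base URL, token, and resource path are placeholders for illustration only.

# Scripted stand-in for a one-off curl/Postman check; all identifiers are hypothetical.
import json
import requests  # third-party HTTP client

BASE_URL = "https://api.example.com/v1"   # placeholder service
TOKEN = "REDACTED"                        # placeholder credential

def inspect_order(order_id: str) -> None:
    """Fetch one resource and print the details an engineer would otherwise check by hand."""
    resp = requests.get(
        f"{BASE_URL}/orders/{order_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    print("status:", resp.status_code)                        # e.g. 200, 404, 500
    print("latency_ms:", resp.elapsed.total_seconds() * 1000)
    try:
        print(json.dumps(resp.json(), indent=2))              # pretty-print the payload
    except ValueError:
        print("non-JSON body:", resp.text[:500])              # truncated raw body for the ticket

if __name__ == "__main__":
    inspect_order("12345")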
Key Responsibilities:
• Develop detailed financial models including 3-statement, DCF, LBO, and scenario/sensitivity analysis (a brief DCF illustration follows this list)
• Draft valuation reports aligned with SEBI, RBI, IBC, and Companies Act guidelines
• Create compelling pitch decks, teasers, and investment memorandums
• Conduct industry-specific research and apply appropriate valuation methodologies
• Ensure precision, compliance, and high-quality output in all deliverables
• Collaborate with internal teams and clients during the afternoon shift
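As a brief illustration of the DCF work referenced in the first responsibility, the sketch below values a stream of free cash flows with a Gordon-growth terminal value and a discount-rate sensitivity grid. All inputs are made-up numbers; real deliverables would be built in Excel/Google Sheets as the requirements note.

# Toy DCF with a discount-rate sensitivity grid; every input is illustrative.
CASH_FLOWS = [100.0, 110.0, 121.0, 133.0, 146.0]   # forecast free cash flows, years 1-5
TERMINAL_GROWTH = 0.03                             # assumed perpetual growth rate

def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Present value of explicit cash flows plus a Gordon-growth terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal / (1 + discount_rate) ** len(cash_flows)

if __name__ == "__main__":
    for wacc in (0.09, 0.10, 0.11, 0.12):          # sensitivity across discount rates
        print(f"WACC {wacc:.0%}: enterprise value = {dcf_value(CASH_FLOWS, wacc, TERMINAL_GROWTH):,.1f}")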
Candidate Requirements:
• Experience: 1–2 years in financial modelling and valuation
• Education: MBA (Finance), semi-qualified CA, or CFA
• Skills:
o Advanced Excel & Google Sheets (Pivot Tables, Power Query, Index-Match, etc.)
o PowerPoint proficiency for investor presentations
o Strong understanding of accounting, finance, and valuation techniques
o Excellent English communication (written & verbal)
o Discipline to work independently in a remote setup with a fixed 2 PM–11 PM schedule
Working Days: 6 working days
Working Timings: 2 PM–11 PM
We Help Our Customers Build Great Products.
Innominds is a trusted innovation acceleration partner focused on designing, developing, and delivering technology solutions for specialized practices in Big Data & Analytics, Connected Devices, and Security, helping enterprises with their digital transformation initiatives. We built these practices on top of our foundational innovation services, such as UX/UI design, application development, and testing.
Over 1,000 people strong, we are a pioneer at the forefront of technology and engineering R&D, priding ourselves on being forward thinkers who anticipate market changes to help our clients stay relevant and competitive.
About the Role:
We are looking for a seasoned Data Engineering Lead to help shape and evolve our data platform. This role is both strategic and hands-on, requiring leadership of a team of data engineers while actively contributing to the design, development, and maintenance of robust data solutions.
Key Responsibilities:
- Lead and mentor a team of Data Engineers to deliver scalable and reliable data solutions
- Own the end-to-end architecture and development of data pipelines, data lakes, and warehouses
- Design and implement batch data processing frameworks to support large-scale analytics
- Define and enforce best practices in data modeling, data quality, and system performance
- Collaborate with cross-functional teams to understand data requirements and deliver insights
- Ensure smooth and secure data ingestion, transformation, and export processes
- Stay current with industry trends and apply them to drive improvements in the platform
Requirements
- Strong programming skills in Python
- Deep expertise in Apache Spark, Big Data ecosystems, and Airflow (a minimal orchestration sketch follows this list)
- Hands-on experience with Azure cloud services and data engineering tools
- Strong understanding of data architecture, data modeling, and data governance practices
- Proven ability to design scalable data systems and enterprise-level solutions
- Strong analytical mindset and problem-solving skills
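To give a flavour of the batch pipelines this role owns, here is a minimal, hypothetical Airflow DAG that submits a PySpark batch job (referenced in the Spark/Airflow requirement above). The DAG id, schedule, application path, and connection id are assumptions for illustration, not details of Innominds' actual platform.

# Minimal sketch: Airflow orchestrating a daily Spark batch job.
# dag_id, schedule, application path and conn_id are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_events_batch",                       # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    aggregate_events = SparkSubmitOperator(
        task_id="aggregate_events",
        application="/opt/jobs/aggregate_events.py",   # hypothetical PySpark script
        conn_id="spark_default",
        conf={"spark.sql.shuffle.partitions": "200"},  # example tuning knob
    )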
For our company to deliver world-class products and services, our business depends on recruiting and hiring the best and the brightest from around the globe. We are looking for the engineers, designers and creative problem solvers who stand out from the crowd but are humble enough to keep learning and growing, are eager to tackle complex problems and can keep up with the demanding pace of our business. We are looking for YOU!
- Big Data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components across ingestion, data modeling, querying, processing, storage, analysis, and data integration, implementing enterprise-level Big Data systems.
- A skilled developer with strong problem solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components such as Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala and Oozie.
- Hands on experience in creating real - time data streaming solutions using Apache Spark core, Spark SQL & DataFrames, Kafka, Spark streaming and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, including the NameNode, DataNode, ResourceManager, NodeManager and JobHistory Server.
- Worked with both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in the installation, configuration and management of Big Data workloads and the underlying infrastructure of a Hadoop cluster.
- Hands-on experience in coding MapReduce/YARN programs using Java, Scala and Python for analyzing Big Data.
- Exposure to Cloudera development environment and management using Cloudera Manager.
- Extensively worked on Spark with Scala on a cluster for computational analytics; installed Spark on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
- Implemented Spark in Python, utilizing DataFrames and the Spark SQL API for faster data processing; handled importing data from different sources into HDFS using Sqoop and performed transformations using Hive and MapReduce before loading the data back into HDFS (a minimal sketch follows this list).
- Used the Spark DataFrames API on the Cloudera platform to perform analytics on Hive data.
- Hands-on experience with Spark MLlib, used for predictive intelligence and customer segmentation, and with the smooth maintenance of Spark Streaming jobs.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Working on creating data pipelines for different ingestion, aggregation and load events, loading consumer response data into Hive external tables at an HDFS location to serve as a feed for Tableau dashboards.
- Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
- In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
- Hands on expertise in real time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 & S3, including automating the setup and extension of clusters in the AWS cloud.
- Extensively worked on Spark with Python on a cluster for computational analytics; installed Spark on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL.
- Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
- Knowledge of the installation, configuration, support and management of Hadoop clusters using Apache and Cloudera (CDH3, CDH4) distributions and on Amazon Web Services (AWS).
- Experienced in writing ad-hoc queries using Cloudera Impala, including Impala analytical functions.
- Experience in creating DataFrames using PySpark and performing operations on them using Python.
- In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS and MapReduce Programming Paradigm, High Availability and YARN architecture.
- Established multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
- Generated various kinds of knowledge reports using Power BI based on business specifications.
- Developed interactive Tableau dashboards to provide a clear understanding of industry-specific KPIs, using quick filters and parameters to handle them more efficiently.
- Well experienced in projects using JIRA, testing, and the Maven and Jenkins build tools.
- Experienced in designing, building, deploying and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance and auto-scaling.
- Good experience with use-case development and with software methodologies like Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
- Good working experience importing data into HDFS using Sqoop and SFTP from various sources such as RDBMS, Teradata, mainframes, Oracle and Netezza, and performing transformations on it using Hive, Pig and Spark.
- Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra and MongoDB, and their integration with Hadoop clusters.
- Hands-on experience in Hadoop Big Data technology, working with MapReduce, Pig and Hive as analysis tools, and Sqoop and Flume as data import/export tools.
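To ground the Spark-with-Python items referenced earlier in this list, here is a minimal PySpark sketch that reads a Hive external table, aggregates it, and writes the result to an HDFS location as a dashboard feed. The database, table, column, and path names are hypothetical placeholders.

# Minimal PySpark sketch: Hive table in, aggregated Parquet out on HDFS.
# Database, table, columns and output path are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("consumer_response_aggregation")
    .enableHiveSupport()                     # allows Spark SQL to read Hive tables
    .getOrCreate()
)

# Read raw events from a Hive external table.
events = spark.table("staging_db.consumer_responses")

# Aggregate responses per campaign and day to feed a Tableau dashboard.
daily = (
    events
    .groupBy("campaign_id", F.to_date("event_ts").alias("event_date"))
    .agg(F.count("*").alias("responses"))
)

# Write back to HDFS as Parquet, partitioned for efficient reads.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///warehouse/feeds/consumer_responses_daily"
)

spark.stop()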
CommVault Consulting Services helps customers overcome the inherent challenges of independently designing, planning, and building out new, modern data & information management environments.
We have an outstanding career opportunity for a successful Implementation Specialist to join our Professional Services team. This team member will be part of our professional services organization, will report directly to the Area Services Manager, and will be responsible for delivering solution deployments to our customers throughout the US and Canada. The ideal candidate will bring a positive attitude, efficient time management, innovative ideas and a strong work ethic, and will deliver quality customer service to our clients.
Job Description
How You Will Make an Impact
- Interface directly with clients to review and discuss deployments, expanding the scale of these projects
- Complete the scope of work as defined by client and sales team
- Validate all CommVault-completed tasks to ensure proper final configuration of the CommVault solution with the customer
- Ensure customer satisfaction during implementation
- Assist team members, as needed
What You Need to Be Ready
- 7+ years of data protection experience
- 2+ years of consulting experience
- Experience with disk/tape storage hardware (HDS, Dell/EMC, NetApp, Oracle, Quantum, etc.)
- Cloud storage experience (Azure, AWS, Oracle)
- Proficiency with backup and recovery of Microsoft SQL, Exchange, and SharePoint
- Technical skills in Oracle, SAP, or other database platforms
- Previous Experience with backup software
- CommVault certified
- Bachelor’s degree
What does the core role include?
- Designing and developing high-volume, low-latency applications for mission-critical systems and delivering high-availability and performance
- Designing stateless components in React Native
- Contributing in all phases of the development lifecycle
- Working with the developers to create and maintain a robust framework to support the apps
- Working with the developers to build the interface with a focus on usability features
What else can you expect in the role?
- Prepare and produce releases of software components
- Optimizing performance for the apps
- Problem-solving skills, analytical mind, and positive attitude
- Ability to think from the end user's perspective, with a focus on improving the overall product experience.
- Deliver across the entire app life cycle: concept, design, build, deploy, test, release to app stores and support
What can fetch you brownie points?
- Hands-on experience with React Native is required
- Hands-on experience with React Native APIs, ReactJS, JavaScript, ECMAScript (OOJS) and JSX.
- Strong understanding of JavaScript ecosystem
- Hands on experience on Android in creating Hybrid / Native applications
- Demonstrable UI/UX experience on a large-scale app.
- Thorough understanding of React Native development tools like IDEs (Nuclide, Atom, Sublime Text, or Visual Studio Code)
- Good knowledge of JS frameworks like ReactJS is a plus.
Desired Skills and Experience
Hybrid Apps, React Native, Native iOS and Android Architecture understanding
Unstop (Formerly Dare2Compete) is looking for Frontend and Full Stack Developers. Developer responsibilities include building our application from concept all the way to completion from the bottom up, fashioning everything from the home page to site layout and function.
Responsibilities of the Candidate:
- Write well-designed, testable, efficient code by using the best software development practices
- Integrate data from various back-end services and databases
- Gather and refine specifications and requirements based on technical needs
- Be responsible for maintaining, expanding, and scaling our products
- Stay plugged into emerging technologies/industry trends and apply them to operations and activities
- End-to-end management and coding of all our products and services
- To make products modular, flexible, scalable and robust
Our Tech Stack:
- Angular 10 or later
- PHP Laravel
- NodeJS
- MYSQL 8
- NoSQL DB
- Amazon AWS services – EC2, WAF, EBS, SNS, SES, Lambda, Fargate, etc.
- The whole ecosystem of AWS
Required Experience, Skills, and Qualifications:
- Freshers and candidates with a maximum of 10 years of experience in the technologies that we work with
- Proven working experience in programming – Full Stack
- Top-notch programming and analytical skills
- Must know and have experience in Angular 2 onwards
- A solid understanding of how web applications work including security, session management, and best development practices
- Adequate knowledge of relational database systems, Object-Oriented Programming and web application development
- Ability to work and thrive in a fast-paced environment, learn rapidly and master diverse web technologies and techniques
- B.Tech in Computer Science or a related field or equivalent
Salary: As per industry benchmarks. This won’t be a restriction for the right candidate.
Yearly salary range: ₹4L - ₹8L + Bonus
All employees will receive a proportional yearly profit share based on their salary. In total, 10% of WhyDonate’s profits will be paid out to employees.
Job Description
We are looking for a full-time remote Back-end JavaScript Developer with 3+ years of experience.
You will be working on new features and maintenance of the WhyDonate platform. Code quality and scalability are a must. Besides writing new code, you will be fixing bugs and monitoring any new errors that come in. You will be working in a fully remote team of 10+ people.
WhyDonate is a product-based company that has been live since 2013. WhyDonate is a fundraising platform that helps charities and individuals to collect donations online.
Hiring Process
- Introduction interview
- Technical interview
- Test assessment
Must-haves
- 4+ years of Software development experience
- 3+ years of experience with JavaScript/TypeScript
- Experience with REST APIs
- Good English language and communication skills
Nice to have
- Experience with back-end development (Node.js)
- Experience with DevOps and Cloud Hosting (Cloudflare, CI/CD)
- Experience with mobile development (React-Native)
- Worked on multiple publicly released web applications
- Knowledge of Elasticsearch
- Knowledge of Redis
- Knowledge of JavaScript testing frameworks
Responsibilities
- Feature development
- Monitoring and fixing bugs
- Uptime, performance and security
- Architecture
- Code quality
Strong experience in customizing and extending the Magento 1 and Magento 2 platforms, PHP coding, MySQL administration and optimization, and JavaScript, HTML, CSS, XML.
Desired Candidate Profile:
- Expert-level knowledge of OO PHP, MySQL and Linux (LAMP stack) aimed at Magento.
- Magento 2 certified candidates are preferred.
- Expertise with development life-cycle methodologies and best practices.
- A good understanding of web frontend technologies (JavaScript, HTML & CSS, SCSS).
- Proven experience building bespoke extensions for Magento 2.
- Demonstrable integration experience using third-party web services through SOAP & REST (payment, fulfillment, etc.).
- A deep understanding of the use of a git-based workflow and git-flow.
- Good understanding of build and dependency management tools such as Composer, Homebrew, Gulp.