
- 2-4 years of experience.
- Fluent in English and Hindi.
- Previous counselling experience with students/parents/guardians.
- Ideal candidates include admission coordinators/counsellors from premium coaching institutes, colleges, and universities.
This is a work-from-office role based in Bangalore.
About the role
- Assist interested students with their application process.
- Educate parents/guardians/students about Scaler's undergraduate program and counsel them about their learning needs.
- Demonstrate the value proposition of Scaler's UG program, nurture interest, and guide prospects further along the admissions funnel.
- Build a communication journey with high-potential leads through to completion of the admission process.
- Follow up with shortlisted candidates to complete payment formalities.
- Assist with and coordinate loan processing and refund-related issues.
- Provide campus tours for parents/guardians if required.
Required Skills
- Excellent verbal and written communication skills in both English and Hindi.
- Exceptional counselling skills to collaborate with parents/guardians to assist students with educational and career planning.
- Comfortable with basic Excel.
- Willingness to assist the team in conducting events, competitions, seminars, etc. on the college campus.

Similar jobs
We are looking for a passionate and skilled Software Developer with experience in Go (Golang) or Rust to join our product engineering team. You will be responsible for building high-performance, scalable backend systems and contributing to modern, efficient software solutions.
Responsibilities:
Design, develop, and maintain backend services using Go or Rust
Write clean, efficient, and well-documented code
Build scalable APIs and microservices
Contribute to system design and architecture discussions
Preferred Qualifications:
Minimum 6 months internship or professional experience in software development
Hands-on project experience in Go (Golang) or Rust
Bachelor’s degree in Engineering, or Master’s degree in Computer Applications (MCA) / Computer Science (M.Sc.)
Strong understanding of data structures, algorithms, and system design basics
Experience working with RESTful APIs and microservices architecture
Familiarity with databases (SQL/NoSQL such as PostgreSQL, MongoDB)
Knowledge of Git and version control practices
Candidates who are available to join immediately or have a notice period of 15 days or less will be preferred
🚀 Hiring: React Native Developer
⭐ Experience: 4+ Years
📍 Location: Jaipur
⭐ Work Mode: 5 Days Work From Office
⏱️ Notice Period: Immediate Joiners or 15 Days
(Only immediate joiners & candidates serving notice period)
We’re looking for a talented React Native Developer to build high-performing mobile apps for iOS & Android.
Responsibilities:
- Develop & maintain cross-platform apps using React Native
- Collaborate with designers & backend teams
- Integrate APIs, debug & optimize performance
- Deploy apps to App Store & Google Play
Requirements:
- 4+ years of experience in React Native & JavaScript/TypeScript
- Strong in state management (Redux/Context API/MobX)
- Knowledge of APIs, native build tools, and publishing apps
- Bonus: Firebase, CI/CD, native iOS/Android
Java AWS engineer with experience in building AWS services like Lambda, Batch, SQS, S3, DynamoDB, etc. using the AWS Java SDK and CloudFormation templates.
- 4 to 8 years of experience in design, development, and triaging for large, complex systems. Strong Java and object-oriented design skills
- 3-4+ years of microservices development
- 2+ years working in Spring Boot
- Experienced with development tools such as IntelliJ/Eclipse, Postman, Git, and Cucumber
- Hands-on experience building microservices-based applications using Spring Boot, REST, and JSON
- DevOps understanding – containers, cloud, automation, security, configuration management, CI/CD
- Experience using CI/CD processes for application software integration and deployment using Maven, Git, and Jenkins
- Experience dealing with NoSQL databases like Cassandra
- Experience building scalable and resilient applications in private or public cloud environments and cloud technologies
- Experience utilizing tools such as Maven, Docker, Kubernetes, ELK, and Jenkins
- Agile software development (typically Scrum, Kanban, SAFe)
- Experience with API gateway and API security.
Job Description:
As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:
Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.
Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.
Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.
Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.
Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
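As a rough illustration of the ingest → transform → load flow and the data-quality checks described above, here is a minimal sketch in plain Python. The column names, tax rule, and validation rule are all hypothetical; a production pipeline for this role would use PySpark DataFrames on Azure Databricks rather than plain Python lists.

```python
# Minimal ETL sketch: ingest raw rows, apply a transformation and a
# data-quality check, then hand off the clean records for loading.
# All field names and rules below are illustrative only.

def ingest(raw_rows):
    """Parse CSV-like rows into dicts (the ingestion stage)."""
    header = raw_rows[0].split(",")
    return [dict(zip(header, row.split(","))) for row in raw_rows[1:]]

def transform(records):
    """Cast types and derive a new field (the transformation stage)."""
    out = []
    for r in records:
        r = dict(r)
        r["amount"] = float(r["amount"])
        r["amount_with_tax"] = round(r["amount"] * 1.18, 2)  # hypothetical rule
        out.append(r)
    return out

def quality_check(records):
    """Drop rows violating a simple validation rule (amount >= 0)."""
    return [r for r in records if r["amount"] >= 0]

def run_pipeline(raw_rows):
    return quality_check(transform(ingest(raw_rows)))

raw = [
    "id,amount",
    "1,100.0",
    "2,-5.0",   # fails the quality rule, will be dropped
    "3,40.0",
]
clean = run_pipeline(raw)
```

The same three stages map directly onto a Databricks notebook: ingestion reads from a landing zone, the transformation is a DataFrame expression, and the quality check becomes a filter or validation rule before the load step.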
Skills and Qualifications:
Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.
Proficiency in designing and developing data pipelines and ETL processes.
Solid understanding of data modeling concepts and database design principles.
Familiarity with data integration and orchestration using Azure Data Factory.
Knowledge of data quality management and data governance practices.
Experience with performance tuning and optimization of data pipelines.
Strong problem-solving and troubleshooting skills related to data engineering.
Excellent collaboration and communication skills to work effectively in cross-functional teams.
Understanding of cloud computing principles and experience with Azure services.
We need a smart IT Support Engineer for desktop, network, and general IT support, covering daily routine troubleshooting and ongoing turnkey projects.
Lots of learning on the latest IT products.
Training provided.
Job Description:
We are looking for a Big Data Engineer who has worked across the entire ETL stack: someone who has ingested data in batch and live-stream formats, transformed large volumes of data daily, built data warehouses to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.
Responsibilities:
- Develop, test, and implement data solutions based on functional / non-functional business requirements.
- Code daily in Scala and PySpark on cloud as well as on-prem infrastructure
- Build data models to store data in the most optimized manner
- Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Implement the ETL process and an optimal data pipeline architecture
- Monitor performance and advise on any necessary infrastructure changes
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader
- Work with data and analytics experts to strive for greater functionality in our data systems
- Proactively identify potential production issues and recommend and implement solutions
- Write quality code and build secure, highly available systems
- Create design documents that describe the functionality, capacity, architecture, and process
- Review peers' code and pipelines before deploying to production, checking for optimization issues and code standards
Skill Sets:
- Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
- Proficient understanding of distributed computing principles
- Experience working with batch-processing and real-time systems using open-source technologies such as NoSQL stores, Spark, Pig, Hive, and Apache Airflow
- Implemented complex projects dealing with considerable data sizes (petabyte scale)
- Optimization techniques (performance, scalability, monitoring, etc.)
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases such as HBase, Cassandra, MongoDB, etc.
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Creation of DAGs for data engineering
- Expert in Python/Scala programming, especially for data engineering/ETL purposes
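Since the skill set calls out creating DAGs for data engineering, here is a minimal sketch of how pipeline tasks form a directed acyclic graph and how a valid execution order is derived from it. The task names are hypothetical; in practice this would typically be an Apache Airflow DAG definition, but the dependency logic is the same.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline DAG: each task maps to the set of upstream
# tasks it depends on.
dag = {
    "ingest_orders": set(),
    "ingest_customers": set(),
    "transform_join": {"ingest_orders", "ingest_customers"},
    "load_warehouse": {"transform_join"},
    "refresh_dashboard": {"load_warehouse"},
}

# static_order() yields a sequence in which every task appears only
# after all of its upstream dependencies.
order = list(TopologicalSorter(dag).static_order())
```

An orchestrator like Airflow does the same resolution at schedule time, additionally running independent tasks (here, the two ingests) in parallel.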
Experience: 2-5 Years
Must-Have: .NET Core + Angular 9+, good communication skills
Good to Have: Azure Cloud, Angular version above 9, some exposure to AngularJS
Key Responsibilities and Accountabilities
- Experience handling business requirements independently from a technical and functional perspective.
- Responsible and accountable for end-to-end execution of projects (or parts of projects), change requests, and defects.
- Resolve technical issues faced in the modules handled and raised by developers.
- Execute projects according to the project plan, resolving day-to-day challenges within the IT development life cycle.
- Always conform to documentation, coding, and quality standards as defined.
- Prepare for and support user acceptance testing.
- Prepare all necessary documentation and processes to enable support of the systems.
- Communicate effectively with internal and external customers, supervisors, and management.
Responsibilities:
*Approaching corporate and retail clients for events
*Understanding the complete requirements
*Sending proposals & quotations
*Getting the deal signed
*Dealing with guests over the telephone
*Putting conference dates in the diary
*Negotiating the rates
*Ensuring marketing promotions run at the right time
