
We are looking for a Telemarketer for a reputed client in Coimbatore (permanent role).
Experience: 0-3 Yrs
Preference:
a. Excellent fluency in English, Tamil, and Hindi.
b. Should be good at cold calling.
Interview Rounds:
1. Online Assessment
2. Face-to-Face Interview in Coimbatore

Job Title : Flutter Dart Developer (Backend Heavy - Node.js)
Experience Required : 5+ Years
Location : Bellandur & Manthali, Bangalore – Onsite Only
Type : Contractual
About the Role :
We are looking for an experienced Flutter Dart Developer with a backend-heavy architecture (Node.js) to join our team on a contractual basis.
This role goes beyond basic UI development — we need someone who understands the complexities of security, caching, APIs, SQL, server-side rendering, performance tuning, and scalable backend architecture.
Mandatory Skills : Flutter, Dart, Backend-heavy architecture (Node.js), RESTful APIs, SQL, Caching, Firebase, State Management (Bloc/Provider/Riverpod), Performance Tuning, Git, Mobile Deployment (iOS & Android), Agile
Key Responsibilities :
- Develop high-performance cross-platform apps using Flutter & Dart.
- Translate complex UI/UX into responsive mobile experiences.
- Collaborate with product, design, and backend (Node.js) teams to deliver scalable features.
- Implement caching, security, SQL, and performance optimization strategies.
- Integrate RESTful APIs, Firebase, and third-party libraries.
- Conduct code reviews and support junior developers.
- Stay updated on emerging mobile/backend technologies.
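The caching responsibility above is language-agnostic. As a purely illustrative sketch (written in Python rather than Dart, with made-up names such as `TTLCache`), a minimal time-to-live cache for API responses might look like:

```python
import time

class TTLCache:
    """Minimal time-to-live cache for API responses (illustrative only)."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() > expiry:
            del self._store[key]  # stale entry: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl_seconds=30)
cache.set("/api/profile", {"name": "Asha"})
print(cache.get("/api/profile"))
```

The same idea, storing a value with an expiry and treating stale entries as misses, applies whether the cache sits in a Dart service class or behind a Node.js endpoint.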
Required Skills & Qualifications :
- 5+ Years in Mobile Development, 3+ Years with Flutter & Dart.
- Strong knowledge of state management (Bloc, Provider, Riverpod).
- Hands-on experience in Node.js for backend development.
- Expertise in API design, SQL, caching, offline storage, security, and performance tuning.
- Experience with Firebase, push notifications, and app deployment on iOS & Android.
- Familiarity with native mobile development (Kotlin/Java or Swift/Obj-C) is a plus.
- Proficient in Git and Agile methodologies with excellent problem-solving skills.

TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT enables its customers to make intelligent, well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. Renowned reference customers, proven technology, a strong research team from renowned universities, and the award of a renowned AI prize (e.g., under EU Horizon 2020) make TVARIT one of the most innovative AI companies in Germany and Europe.
Requirements:
- Python Experience: Minimum 3 years.
- Software Development Experience: Minimum 8 years.
- Data Engineering and ETL Workloads: Minimum 2 years.
- Familiarity with Software Development Life Cycle (SDLC).
- CI/CD Pipeline Development: Experience in developing CI/CD pipelines for large projects.
- Agile Framework & Sprint Methodology: Experience with Jira.
- Source Version Control: Experience with GitHub or a similar version-control system.
- Team Leadership: Experience leading a team of software developers/data scientists.
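The ETL requirement above can be pictured with a minimal, purely illustrative extract-transform-load sketch in Python (the `RAW` data and function names are invented for the example):

```python
import csv, io

# Hypothetical raw export; in a real ETL job this would come from a file or API.
RAW = """sensor,reading
press_01,12.5
press_01,
press_02,9.8
"""

def extract(text):
    # Parse the raw CSV export into dictionaries, one per row.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Drop rows with missing readings and cast values to float.
    return [
        {"sensor": r["sensor"], "reading": float(r["reading"])}
        for r in rows if r["reading"]
    ]

def load(rows, target):
    target.extend(rows)  # stand-in for a database or warehouse insert

warehouse = []
load(transform(extract(RAW)), warehouse)
print(warehouse)  # two clean rows; the row with the empty reading was filtered out
```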
Good to Have:
- Experience with Golang.
- DevOps/Cloud Experience (preferably AWS).
- Experience with React and TypeScript.
Responsibilities:
- Mentor and train a team of data scientists and software developers.
- Lead and guide the team in best practices for software development and data engineering.
- Develop and implement CI/CD pipelines.
- Ensure adherence to Agile methodologies and participate in sprint planning and execution.
- Collaborate with the team to ensure the successful delivery of projects.
- Provide on-site support and training in Pune.
Skills and Attributes:
- Strong leadership and mentorship abilities.
- Excellent problem-solving skills.
- Effective communication and teamwork.
- Ability to work in a fast-paced environment.
- Passionate about technology and continuous learning.
Note: This is a part-time position paid on an hourly basis. The initial commitment is 4-8 hours per week, with potential fluctuations.
Join TVARIT and be a pivotal part of shaping the future of software development and data engineering.
Responsibilities
- Maintain financial responsibility for all expenses, wages, and asset management
- Identify operational deficiencies and implement plans for improvement
- Create and maintain a weekly report on operations and sales at the branch
- Hire and train all employees of the branch
Qualifications
- Bachelor's degree or equivalent in Business
- 2+ years of management or supervisory experience
- Experience hiring and training individuals


Primary Responsibilities
- Understand current state architecture, including pain points.
- Create and document future state architectural options to address specific issues or initiatives using Machine Learning.
- Innovate and scale architectural best practices around building and operating ML workloads by collaborating with stakeholders across the organization.
- Develop CI/CD & ML pipelines that help to achieve end-to-end ML model development lifecycle from data preparation and feature engineering to model deployment and retraining.
- Provide recommendations around security, cost, performance, reliability, and operational efficiency, and implement them.
- Provide thought leadership around the use of industry standard tools and models (including commercially available models and tools) by leveraging experience and current industry trends.
- Collaborate with the Enterprise Architect, consulting partners and client IT team as warranted to establish and implement strategic initiatives.
- Make recommendations and assess proposals for optimization.
- Identify operational issues and recommend and implement strategies to resolve problems.
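The end-to-end ML lifecycle described above is usually structured as a chain of independent stages that a CI/CD or orchestration tool can run in order. A deliberately toy sketch (all names and the "model" are invented for illustration; a real pipeline would use SageMaker, Spark, or similar):

```python
# Hypothetical end-to-end pipeline skeleton: each stage is a plain function so
# it can be wired into a CI/CD or orchestration tool later.
def prepare_data(raw):
    # Feature-engineering stand-in: pair each point with a derived feature.
    return [(x, x * x) for x in raw]

def train(dataset):
    # Toy "model": predict the mean of the derived feature.
    mean = sum(f for _, f in dataset) / len(dataset)
    return {"predict": lambda _x: mean}

def evaluate(model, dataset):
    # Mean absolute error of the toy model on the dataset.
    return sum(abs(model["predict"](x) - f) for x, f in dataset) / len(dataset)

def deploy(model, registry):
    registry["current"] = model  # stand-in for pushing to a model registry

registry = {}
dataset = prepare_data([1.0, 2.0, 3.0])
model = train(dataset)
error = evaluate(model, dataset)
deploy(model, registry)
print(round(error, 3))
```

Keeping each stage a pure function makes retraining a matter of re-running the chain on fresh data, which is what the CI/CD requirement is asking a candidate to automate.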
Must have:
- 3+ years of experience in developing CI/CD & ML pipelines for end-to-end ML model/workloads development
- Strong knowledge in ML operations and DevOps workflows and tools such as Git, AWS CodeBuild & CodePipeline, Jenkins, AWS CloudFormation, and others
- Background in ML algorithm development, AI/ML Platforms, Deep Learning, ML Operations in the cloud environment.
- Strong programming skillset with high proficiency in Python, R, etc.
- Strong knowledge of AWS cloud and its technologies such as S3, Redshift, Athena, Glue, SageMaker etc.
- Working knowledge of databases, data warehouses, data preparation and integration tools, along with big data parallel processing layers such as Apache Spark or Hadoop
- Knowledge of pure and applied math, ML and DL frameworks, and ML techniques, such as random forest and neural networks
- Ability to collaborate with Data scientist, Data Engineers, Leaders, and other IT teams
- Ability to work with multiple projects and work streams at one time. Must be able to deliver results based upon project deadlines.
- Willing to flex daily work schedule to allow for time-zone differences for global team communications
- Strong interpersonal and communication skills

Designation: Graphics and Simulation Engineer
Experience: 3-15 Yrs
Position Type: Full Time
Position Location: Hyderabad
Description:
We are looking for engineers to work on applied research problems related to computer graphics in autonomous driving of electric tractors. The team works towards creating a universe of farm environments in which tractors can drive around for the purposes of simulation, synthetic data generation for deep-learning training, simulation of edge cases, and physics modelling.
Technical Skills:
● Background in OpenGL, OpenCL, graphics algorithms and optimization is necessary.
● Solid theoretical background in computational geometry and computer graphics is desired. Deep learning background is optional.
● Experience in two view and multi-view geometry.
● Necessary Skills: Python, C++, Boost, OpenGL, OpenCL, Unity3D/Unreal, WebGL, CUDA.
● For freshers, academic experience in graphics is also preferred.
● Experienced candidates in Computer Graphics with no prior Deep Learning experience willing to apply their knowledge to vision problems are also encouraged to apply.
● Software development experience on low-power embedded platforms is a plus.
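The two-view and multi-view geometry requirement builds on the basic pinhole projection model. As a small, self-contained illustration (the intrinsics f, cx, cy are made-up values):

```python
# Illustrative pinhole-camera projection: a 3D point in camera coordinates maps
# to pixel coordinates via the intrinsics f (focal length in pixels) and the
# principal point (cx, cy). All numbers here are invented for the example.
def project(point3d, f=500.0, cx=320.0, cy=240.0):
    X, Y, Z = point3d
    if Z <= 0:
        raise ValueError("point must be in front of the camera")
    u = f * X / Z + cx  # perspective division, then principal-point offset
    v = f * Y / Z + cy
    return (u, v)

# A point 2 m ahead and 0.5 m to the right projects right of the image centre.
print(project((0.5, 0.0, 2.0)))  # (445.0, 240.0)
```

Two-view geometry then asks the inverse question: given the projections of the same point in two cameras, recover the relative pose and depth.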
Responsibilities:
● Solid grasp of engineering principles and a clear understanding of data structures and algorithms.
● Ability to understand, optimize and debug imaging algorithms.
● Ability to drive a project from conception to completion, from research papers to code, with a disciplined approach to software development on the Linux platform.
● Demonstrate outstanding ability to perform innovative and significant research in the form of technical papers, thesis, or patents.
● Optimize runtime performance of designed models.
● Deploy models to production and monitor performance and debug inaccuracies and exceptions.
● Communicate and collaborate with team members in India and abroad for the fulfillment of your duties and organizational objectives.
● Thrive in a fast-paced environment and own the project end to end with minimal hand-holding.
● Learn & adapt new technologies & skillsets
● Work on projects independently with timely delivery and a defect-free approach.
● Candidates whose thesis focuses on the above skill set may be given preference.
Roles & Responsibilities
- Proven experience deploying and tuning open-source components into enterprise-ready production tooling
- Experience with datacentre (Metal as a Service, MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required)
- Deep understanding of Linux from kernel mechanisms through user space management
- Experience with CI/CD (Continuous Integration and Deployment) system solutions such as Jenkins.
- Experience using monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, and New Relic to trigger instant alerts, reports, and dashboards
- Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime across globally distributed, clustered, production and non-production virtualized infrastructure
- Wide understanding of IP networking as well as data centre infrastructure
Skills
- Expert with software development tools and source-code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production
- Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2
- Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, JMX Exporter agents.
- Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing
- Strong understanding of and hands-on experience with:
  - the Apache Spark framework, specifically Spark Core and Spark Streaming
  - orchestration platforms: Mesos and Kubernetes
  - data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS
  - core presentation technologies: Kibana and Grafana
- Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products
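The Ansible/Jinja2 requirement above typically means roles whose tasks render per-host configuration from templates. A minimal hypothetical fragment (file names and the `service_port` variable are invented for illustration; `ansible.builtin.template` and `inventory_hostname` are standard Ansible features):

```yaml
# roles/service/tasks/main.yml (hypothetical)
- name: Render service config from template
  ansible.builtin.template:
    src: service.conf.j2
    dest: /etc/service/service.conf
    mode: "0644"
```

```
# roles/service/templates/service.conf.j2 (hypothetical)
listen_port = {{ service_port | default(8080) }}
node_name   = {{ inventory_hostname }}
```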
Certification
- Red Hat Certified Architect certificate or equivalent required
- CCNA certificate required
- 3-5 years of experience running open-source big data platforms





