
Must-Have Requirements
1. Fluency in English and Tamil.
2. Sales experience selling to industries and small and medium businesses in industrial areas (manufacturing, garments, pharma, etc.).
3. Experience working with channel partners and agents.
Nice-to-Have Requirements: B2B sales in financial services, insurance, software, etc.

Similar jobs
Job Title: SEO Marketing Specialist (Full-time)
Location: Sector 18, Noida, Onsite
Experience Required: 2+ Years
About the Role
We are seeking an SEO Marketing Specialist with proven expertise in driving organic growth within the Web3 ecosystem. The role requires a deep understanding of blockchain, crypto, NFTs, DeFi, and dApps to create strategies that boost visibility, improve rankings, and establish authority in this rapidly evolving industry.
Key Responsibilities
- Conduct keyword research specific to blockchain, crypto, NFT, and Web3-related topics.
- Optimize website pages, blogs, and landing pages for search engines.
- Implement on-page SEO best practices (meta tags, internal linking, headers, alt text, schema).
- Build high-quality backlinks via guest posting, PR, and partnerships.
- Improve technical SEO performance (site speed, indexing, structured data, mobile optimization).
- Monitor and analyze SEO metrics using Google Analytics, Search Console, Ahrefs, SEMrush.
- Collaborate with content writers and marketing teams to produce SEO-friendly Web3 content.
- Stay ahead of Web3 trends and adapt SEO strategies accordingly.
Required Skills
- Strong knowledge of SEO tools (Ahrefs, SEMrush, Moz, GA, GSC).
- Expertise in on-page, off-page, and technical SEO.
- Experience in content optimization & SEO copywriting.
- Solid understanding of the Web3 ecosystem (blockchain, cryptocurrencies, NFTs, DeFi, dApps).
- Familiarity with CMS platforms (WordPress, Webflow, etc.).
- Analytical mindset with ability to create SEO reports & insights.
Job Title: Front-End Developer – IVR & Web Technologies
Location: Chennai (Taramani)
Experience: 1-3 Years
Employment Type: Full-time
Job Summary:
We are looking for a highly skilled Front-End Developer with strong expertise in Angular, React.js, and JavaScript, and a good understanding of IVR systems, Perl/Bash scripting, and WebRTC. The ideal candidate should also be familiar with data visualization using D3.js and have basic knowledge of web services (REST/SOAP).
Key Responsibilities:
- Develop and maintain front-end applications using Angular and React.js
- Create dynamic and interactive UIs using JavaScript, D3.js, and modern frameworks
- Work with IVR systems and integrate them into web-based dashboards
- Write and manage Perl and Bash scripts to automate tasks and system interactions
- Implement and support WebRTC features for real-time communication interfaces
- Consume and interact with REST/SOAP web services
- Collaborate with back-end developers, designers, and system engineers to deliver high-quality solutions
- Troubleshoot and debug cross-browser and cross-platform issues
Key Skills:
- ✅ Strong knowledge of Angular (16+)
- ✅ Strong expertise in JavaScript and React.js
- ✅ Experience working with IVR (Interactive Voice Response) systems
- ✅ Hands-on with Perl and Bash scripting
- ✅ Experience with WebRTC implementation in front-end apps
- ✅ Familiarity with D3.js for data visualization
- ✅ Understanding of basic web services (REST/SOAP)
- ✅ Knowledge of front-end testing frameworks is a plus
The Sr AWS/Azure/GCP Databricks Data Engineer at Koantek will use comprehensive modern data engineering techniques and methods with Advanced Analytics to support business decisions for our clients. Your goal is to support the use of data-driven insights to help our clients achieve business outcomes and objectives. You can collect, aggregate, and analyze structured/unstructured data from multiple internal and external sources and present patterns, insights, and trends to decision-makers. You will help design and build data pipelines, data streams, reporting tools, information dashboards, data service APIs, data generators, and other end-user information portals and insight tools. You will be a critical part of the data supply chain, ensuring that stakeholders can access and manipulate data for routine and ad hoc analysis to drive business outcomes using Advanced Analytics. You are expected to function as a productive member of a team, working and communicating proactively with engineering peers, technical leads, project managers, product owners, and resource managers.
Requirements:
- Strong experience as an AWS/Azure/GCP Data Engineer; must have AWS/Azure/GCP Databricks experience
- Expert proficiency in Spark, Scala, and Python
- Must have data migration experience from on-prem to cloud
- Hands-on experience with Kinesis to process and analyze stream data, Event/IoT Hubs, and Cosmos DB
- In-depth understanding of Azure/AWS/GCP cloud, data lake, and analytics solutions
- Expert-level, hands-on experience designing and developing applications on Databricks
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
- In-depth understanding of Spark architecture, including Spark Streaming, Spark Core, Spark SQL, DataFrames, RDD caching, and Spark MLlib
- Hands-on experience with the industry technology stack for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Hands-on knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake
- Good working knowledge of code versioning tools such as Git, Bitbucket, or SVN
- Hands-on experience using Spark SQL with various data sources such as JSON, Parquet, and key-value pairs
- Experience preparing data for data science and machine learning, with exposure to model selection, model lifecycle, hyperparameter tuning, model serving, deep learning, etc.
- Demonstrated experience preparing data and automating and building data pipelines for AI use cases (text, voice, image, IoT data, etc.)
- Good to have programming experience with .NET or Spark/Scala
- Experience creating tables, partitioning, bucketing, and loading and aggregating data using Spark Scala, Spark SQL, or PySpark
- Knowledge of AWS/Azure/GCP DevOps processes such as CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
- Working experience with Visual Studio, PowerShell scripting, and ARM templates; able to build ingestion to ADLS and enable the BI layer for analytics
- Strong understanding of data modeling and defining conceptual, logical, and physical data models
- Big data/analytics/information analysis/database management in the cloud
- IoT/event-driven/microservices in the cloud; experience with private and public cloud architectures, their pros and cons, and migration considerations
- Ability to stay up to date with industry standards and technological advancements that enhance data quality and reliability to advance strategic initiatives
- Working knowledge of RESTful APIs, the OAuth2 authorization framework, and security best practices for API gateways
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications
- Guide customers on data engineering best practices; provide proofs of concept, architect solutions, and collaborate as needed
- 2+ years of hands-on experience designing and implementing multi-tenant solutions using AWS/Azure/GCP Databricks for data governance, data pipelines for near-real-time data warehouses, and machine learning solutions
- 5+ years of overall experience in software development, data engineering, or data analytics using Python, PySpark, Scala, Spark, Java, or equivalent technologies, with hands-on expertise in Apache Spark (Scala or Python)
- 3+ years of experience in query tuning, performance tuning, troubleshooting, and debugging Spark and other big data solutions
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience
- Ability to manage competing priorities in a fast-paced environment
- Ability to resolve issues
- Basic experience with or knowledge of agile methodologies
- AWS Certified Solutions Architect - Professional
- Databricks Certified Associate Developer for Apache Spark
- Microsoft Certified: Azure Data Engineer Associate
- Google Cloud Certified - Professional certification
Job Description:
- This is a BPO night-shift job (US voice process) in Nagercoil.
- It is purely a night shift, with fixed Saturday and Sunday off.
- This is not sales or telemarketing; the role involves assisting US citizens.
- It is work-from-office only, with a salary range of 15,000 to 25,000 per month along with unlimited incentives based on the leads you generate (Rs 500 per lead).
Responsibilities:
- Handle outbound calls to international customers.
- This is a US Government project where you will receive customer details and complete the further process.
- Maintain accurate and detailed records of customer interactions and transactions.
- Collaborate with team members to achieve individual and team goals.
- Strive to achieve customer satisfaction and ensure positive feedback.
Requirements:
- Both freshers and experienced candidates can apply.
- Excellent communication skills / fluency in English.
- Ensure timely and professional responses to all queries.
- Strong ability to multitask and make fast decisions independently.
- Night shift only (7.30 PM to 4.30 AM).
Benefits:
- Competitive salary + incentives.
- After-shift drop facility for female employees.
- ESI, PF, and insurance benefits.
Key Responsibilities (Data Developer: Python, Spark)
Exp: 2 to 9 Yrs
Development of data platforms, integration frameworks, processes, and code.
Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages
Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests and unit tests.
Elaborate stories in a collaborative agile environment (SCRUM or Kanban)
Familiarity with cloud platforms like GCP, AWS or Azure.
Experience with large data volumes.
Familiarity with writing rest-based services.
Experience with distributed processing and systems
Experience with Hadoop / Spark toolsets
Experience with relational database management systems (RDBMS)
Experience with Data Flow development
Knowledge of Agile and associated development techniques
About Us
We have grown over 1400% in revenues in the last year.
Interface.ai provides an Intelligent Virtual Assistant (IVA) to FIs to automate calls and customer inquiries across multiple channels and engage their customers with financial insights and upsell/cross-sell.
Our IVA is transforming financial institutions’ call centers from cost centers into revenue centers.
Our core technology is built 100% in-house, with several breakthroughs in Natural Language Understanding. Our parser is built on zero-shot learning, which helps us launch industry-specific IVAs that achieve over 90% accuracy on day one.
We are 45 people strong, with employees spread across India and the US. Many come from ML teams at Apple, Microsoft, and Salesforce in the US, along with enterprise architects with over 20 years of experience building large-scale systems. Our India team consists of people from ISB, IIMs, and many who were previously part of early-stage startups.
We are a fully remote team.
Founders come from Banking and Enterprise Technology backgrounds with previous experience scaling companies from scratch to $50M+ in revenues.
As a Site Reliability Engineer you will be in charge of:
- Designing, analyzing and troubleshooting large-scale distributed systems
- Engaging in cross-functional team discussions on design, deployment, operation, and maintenance, in a fast-moving, collaborative set up
- Building automation scripts to validate the stability, scalability, and reliability of interface.ai’s products & services as well as enhance interface.ai’s employees’ productivity
- Debugging and optimizing code and automating routine tasks
- Troubleshooting and diagnosing issues (hardware or software), and proposing and implementing solutions to reduce their recurrence
- Performing periodic on-call duty to handle the security, availability, and reliability of interface.ai’s products
- Following and writing good code and solid engineering practices
Requirements
You can be a great fit if you are :
- Extremely self-motivated
- Able to learn quickly
- Growth mindset (read this if you don't know what it means: https://www.amazon.com/Mindset-Psychology-Carol-S-Dweck/dp/0345472322)
- Emotional maturity (read this if you don't know what it means: https://medium.com/@krisgage/15-signs-of-emotional-maturity-38b1a2ab9766)
- Passionate about the possibilities at the intersection of AI + Banking
- Worked in a startup of 5 to 30 employees
- A developer with a strong interest in systems design. You will be building, maintaining, and scaling our cloud infrastructure through software tooling and automation.
- 4-8 years of industry experience developing and troubleshooting large-scale infrastructure on the cloud
- Have a solid understanding of system availability, latency, and performance
- Strong programming skills in at least one major programming language and the ability to learn new languages as needed
- Strong System/network debugging skills
- Experience with management/automation tools such as Terraform, Puppet, Chef, or Salt
- Experience setting up production-level monitoring and telemetry
- Expertise in container management and AWS
- Experience with Kubernetes is a plus
- Experience building CI/CD pipelines
- Experience working with WebSockets, Redis, Postgres, Elasticsearch, and Logstash
- Experience working in an agile team environment and proficient understanding of code versioning tools, such as Git.
- Ability to effectively articulate technical challenges and solutions.
- Proactive outlook for ways to make our systems more reliable
So, we are looking for amazing product managers like yourself, self-motivated and driven to build products with a massive impact on society. You get to work closely with IIT/IIM alumni to build India's next biggest community!
PS: Recently, the jM Android app won the #GooglePlayBestOf2021 Hidden Gem Award. Join us in our mission to spread #1BillionSmiles and become #GooglePlayAppOfTheYear 2022.
Relevant links:
- Website: jumpingminds.ai
- Instagram Page: instagram.com/jumpingminds.ai
- #GooglePlayBestOf2021 Award: jumpingminds.ai/googleplaybestof2021
- Founders:
1. Ariba Khan - linkedin.com/in/ariba-khan-ab8a2944/
2. Piyush Gupta - linkedin.com/in/piyushgupta27/
- 3-6 years of relevant work experience in a Data Engineering role.
- Advanced working SQL knowledge and experience with relational databases, including query authoring (SQL) and working familiarity with a variety of databases.
- Experience building and optimizing data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- A good understanding of Airflow, Spark, NoSQL databases, Kafka is nice to have.
- Premium Institute Candidates only
Gupshup is a product development company that was incubated at IIT in 2005.
Headquartered in Silicon Valley, Gupshup is a global leader in cloud messaging, enabling businesses to build engaging conversational experiences, seamlessly across 30+ messaging channels, using a globally available cloud API.
Gupshup handles over 4.5 billion messages per month and has processed over 225 billion messages, enabling over 36,000 businesses to engage nearly a billion users across channels including SMS, WhatsApp, Facebook Messenger, Twitter, WeChat, Viber, Slack, Android RCS, Mobile App and Mobile Web.
Gupshup offers a comprehensive product portfolio that includes an easy-to-use omnichannel messaging API, an advanced bot-building platform, and mobile marketing tools. Gupshup has also forged strategic partnerships with Facebook, WhatsApp, Google, and Cisco to offer innovative mobile messaging solutions with broad reach.
We have an opportunity for a Software Engineer - UI.
Job Responsibilities:
Highly skilled at front-end engineering using object-oriented JavaScript, various JavaScript libraries and micro-frameworks (jQuery, Angular, Prototype, Dojo, Backbone, YUI), HTML, and CSS.
A very strong JavaScript foundation and a clear understanding of JavaScript classes, prototype-based inheritance, modules, private member scopes, etc.
Well-versed in software engineering principles, frameworks and technologies.
Involvement in developing and enhancing main front end platform - website
Experience performing code analysis, requirements analysis, identification of code metrics, system risk analysis, and software reliability analysis.
Develop specifications and designs for complex applications, or modify and maintain complex existing applications.
Excellent communication skills
Self-directed team player who thrives in a continually changing environment.
Regards,
Tina Dsouza
