
- Create and manage nurture streams across email, digital, and direct mail that progress leads through the stages of the buyer's life cycle, from awareness to selection
- Run campaigns for lead management, lead nurturing, lead scoring, and lead routing
- Analyze holistic data on leads and customers to evaluate successful strategies and identify areas for improvement and growth.
- Iterate and optimize campaigns to drive the best possible results
- Work with the Product Marketing team to create and develop content-rich nurturing programs using consumer insights data and marketing automation tools
- Partner with marketing to improve strategy and execution of demand generation efforts
- Collaborate with sales operations on improving and troubleshooting issues with the lead management process
- Develop and maintain nurture reporting methodology, reports and dashboards, including performance metrics for nurture tracks and KPIs for overall program performance
- 5-6 years in a Marketing or Sales Operations role
- 3+ years of experience managing email campaigns and program implementation
- Strong experience using marketing automation tools is essential
- Deep understanding of lead-to-revenue demand funnels, their characteristics, and what causes leads to move across stages
- Experience in the B2B software / SaaS space is a big advantage
- MBA from a reputable institution, preferably in Sales and/or Marketing.
- Excellent English presentation, written, and verbal communication skills, with an eye for quality and relevance
- Strength in collaborating with cross-functional teams, including Product Marketing, Sales and Marketing
- Self-directed, organized team player capable of hands-on execution as well as long-term business planning
- Implement email marketing strategies to cultivate leads into new customers (and demonstrate with data).
- Set up and document lead nurturing processes
- Plan and implement trigger-based nurturing programs that target all stages of prospect development


Role Overview
We are seeking an experienced Power BI & Microsoft Fabric Consultant to partner with our BI team for a short-term engagement focused on capability building, governance setup, and platform enablement.
The primary objective of this role is to upskill the team and establish a robust, scalable foundation for Power BI and Microsoft Fabric—covering security, governance, administration, and best practices—so that the team can independently manage and scale the platform post-engagement.
This is a hands-on + coaching role, not just advisory.
Key Responsibilities
1. Power BI & Fabric Enablement
- Conduct structured training sessions and workshops on:
- Power BI (Pro + Fabric-integrated experiences)
- Microsoft Fabric (Lakehouse, Warehouse, Dataflows Gen2, Pipelines, etc.)
- Build foundational understanding of:
- End-to-end analytics workflows in Fabric
- Integration between Power BI and Fabric workloads
- The data lake will be Snowflake
2. Governance & Security Framework
- Design and implement Power BI & Fabric governance model, including:
- Workspace strategy (Dev/Test/Prod separation)
- Naming conventions and standards
- Content lifecycle management
- Establish security architecture:
- Role-based access control (RBAC)
- Row-Level Security (RLS) / Object-Level Security (OLS)
- Data access patterns across Fabric and Power BI
- Define data sharing and access control processes
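Parts of a governance model like the one above can be enforced automatically. A minimal sketch in Python, where the naming pattern and allowed environment names are illustrative assumptions, not a Power BI requirement:

```python
import re

# Hypothetical naming standard: <Domain>-<Env>-<Subject>, e.g. "FIN-Prod-Sales".
# The pattern and the Dev/Test/Prod environment names are assumptions chosen
# for this sketch; a real tenant would encode its own convention here.
WORKSPACE_PATTERN = re.compile(r"^[A-Z]{2,5}-(Dev|Test|Prod)-[A-Za-z0-9]+$")

def check_workspace_names(names):
    """Return (name, is_compliant) pairs for a governance audit report."""
    return [(name, bool(WORKSPACE_PATTERN.match(name))) for name in names]

report = check_workspace_names(["FIN-Prod-Sales", "sandbox_test", "HR-Dev-Attrition"])
for name, ok in report:
    print(f"{name}: {'OK' if ok else 'non-compliant'}")
```

A check like this can run against a workspace inventory pulled from the tenant and flag drift from the agreed standard.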
3. Administration & Platform Setup
- Configure and optimize:
- Power BI tenant settings
- Fabric capacity (capacity planning, workload management)
- Monitoring and usage metrics
- Set up:
- Deployment pipelines
- CI/CD best practices (where applicable)
- Audit logs and governance controls
4. Best Practices & Standards
- Define and document:
- Development standards (data modeling, DAX, report design)
- Performance optimization guidelines
- Dataset/reusable semantic model strategy
- Establish certification and promotion workflows for datasets and reports
5. Hands-On Implementation
- Work alongside the team to:
- Build or refactor key dashboards/reports using best practices
- Set up Fabric artifacts (Lakehouse/Warehouse/Pipelines)
- Ensure real use cases are implemented, not just theoretical training
6. Knowledge Transfer & Self-Sufficiency
- Provide:
- Playbooks, SOPs, and governance documentation
- Recorded sessions and training materials
- Mentor team members through:
- Office hours / Q&A sessions
- Code and architecture reviews
- Ensure the team can independently:
- Manage Fabric capacity
- Govern Power BI environment
- Implement secure and scalable solutions
Expected Outcomes (End of Engagement)
- Fully defined and implemented Power BI & Fabric governance framework
- Configured and optimized Fabric capacity + Power BI tenant
- Established security and access control processes
- Documented standards, playbooks, and operating model
- BI team capable of independent development, administration, and governance
Required Skills & Experience
Must-Have
- 5+ years of experience in Power BI development and administration
- Hands-on experience with Microsoft Fabric (end-to-end)
- Strong expertise in:
- Power BI governance and tenant administration
- Fabric capacity management
- Security models (RLS, RBAC, data access controls)
- Experience setting up enterprise BI governance frameworks
- Proven track record of training and mentoring teams
Good-to-Have
- Experience with data platforms (Snowflake, Azure, etc.)
- Knowledge of CI/CD for Power BI (DevOps integration)
- Familiarity with data catalog and lineage tools (e.g., Atlan, Alation)
- Understanding of modern data architecture patterns
Job Description: Data Science with ML
Type of hire : PWD and Non PWD
Employment Type : Full Time
Notice Period : Immediate Joiner
Work Days : Mon - Fri
About Ampera:
Ampera Technologies is a purpose-driven digital IT services company focused on supporting clients with their Data, AI/ML, Accessibility, and other digital IT needs, while ensuring equal opportunities for Persons with Disabilities talent. Ampera Technologies has its global headquarters in Chicago, USA, and its Global Delivery Center in Chennai, India. We are actively expanding our tech delivery team in Chennai and across India. We offer exciting benefits, such as: 1) hybrid and remote work options, 2) the opportunity to work directly with our global enterprise clients, 3) the opportunity to learn and implement evolving technologies, 4) comprehensive healthcare, and 5) a conducive environment for Persons with Disability talent, meeting physical and digital accessibility standards.
About the Role
We are looking for a skilled Data Scientist with strong Machine Learning experience to design, develop, and deploy data-driven solutions. The role involves working with large datasets, building predictive and ML models, and collaborating with cross-functional teams to translate business problems into analytical solutions.
Key Responsibilities
- Analyze large, structured and unstructured datasets to derive actionable insights.
- Design, build, validate, and deploy Machine Learning models for prediction, classification, recommendation, and optimization.
- Apply statistical analysis, feature engineering, and model evaluation techniques.
- Work closely with business stakeholders to understand requirements and convert them into data science solutions.
- Develop end-to-end ML pipelines including data preprocessing, model training, testing, and deployment.
- Monitor model performance and retrain models as required.
- Document assumptions, methodologies, and results clearly.
- Collaborate with data engineers and software teams to integrate models into production systems.
- Stay updated with the latest advancements in data science and machine learning.
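The end-to-end pipeline responsibility above (preprocessing, training, prediction) can be sketched in miniature. This pure-Python toy standardizes features and fits a nearest-centroid classifier on synthetic data; in practice this would be a scikit-learn `Pipeline`, and the data and labels here are invented for illustration:

```python
import math

def fit_scaler(rows):
    """Preprocessing step: learn per-feature mean and std for standardization."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    stds = [math.sqrt(sum((r[j] - means[j]) ** 2 for r in rows) / n) or 1.0
            for j in range(d)]
    return means, stds

def transform(rows, scaler):
    means, stds = scaler
    return [[(x - m) / s for x, m, s in zip(r, means, stds)] for r in rows]

def fit_centroids(rows, labels):
    """Training step: one centroid per class."""
    centroids = {}
    for label in set(labels):
        members = [r for r, y in zip(rows, labels) if y == label]
        centroids[label] = [sum(col) / len(members) for col in zip(*members)]
    return centroids

def predict(rows, centroids):
    """Prediction step: assign each row to the nearest class centroid."""
    return [min(centroids, key=lambda c: math.dist(r, centroids[c]))
            for r in rows]

# Synthetic training data: two well-separated clusters.
X = [[1.0, 1.2], [0.9, 1.0], [5.0, 5.1], [5.2, 4.9]]
y = ["low", "low", "high", "high"]

scaler = fit_scaler(X)
model = fit_centroids(transform(X, scaler), y)
preds = predict(transform([[1.1, 1.1], [5.1, 5.0]], scaler), model)
print(preds)
```

The point is the shape of the pipeline: the scaler is fit on training data only, and the same transform is reused at prediction time.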
Required Skills & Qualifications
- Bachelor’s or Master’s degree in computer science, Data Science, Statistics, Mathematics, or related fields.
- 5+ years of hands-on experience in Data Science and Machine Learning.
- Strong proficiency in Python (NumPy, Pandas, Scikit-learn).
- Experience with ML algorithms:
- Regression, Classification, Clustering
- Decision Trees, Random Forest, Gradient Boosting
- SVM, KNN, Naïve Bayes
- Solid understanding of statistics, probability, and linear algebra.
- Experience with data visualization tools (Matplotlib, Seaborn, Power BI, Tableau – preferred).
- Experience working with SQL and relational databases.
- Knowledge of model evaluation metrics and optimization techniques.
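The model evaluation metrics mentioned above can be computed from first principles. A sketch of binary precision, recall, and F1; in practice these come from `sklearn.metrics`, and the labels below are made up:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Binary classification metrics from true/false positive/negative counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]  # one false negative, one false positive
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(round(p, 3), round(r, 3), round(f1, 3))
```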
Preferred / Good to Have
- Experience with Deep Learning frameworks (TensorFlow, PyTorch, Keras).
- Exposure to NLP, Computer Vision, or Time Series forecasting.
- Experience with big data technologies (Spark, Hadoop).
- Familiarity with cloud platforms (AWS, Azure, GCP).
- Experience with MLOps, CI/CD pipelines, and model deployment.
Soft Skills
- Strong analytical and problem-solving abilities.
- Excellent communication and stakeholder interaction skills.
- Ability to work independently and in cross-functional teams.
- Curiosity and willingness to learn new tools and techniques.
Accessibility & Inclusion Statement
We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.
Equal Opportunity Employer (EOE) Statement
Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Inside Sales Executive
WHAT ARE WE LOOKING FOR
- Minimum 6 months of experience in telesales / telecalling / inside sales.
- Comfortable working in a target-driven role & chasing daily / weekly numbers.
- Basic understanding of sales calls, follow-ups, and closures.
- Confident verbal communication in Hindi and/or English.
- Comfortable with cold calling & handling customer objections.
- Open to learning and adapting in a fast-paced startup environment.
WHAT YOU WILL BE DOING
- Conduct cold calls to travel agents to introduce StampMyVisa’s services.
- Drive activation & onboarding of agents onto the portal.
- Conduct product demos.
- Maintain and update call logs, activation trackers, and CRM entries.
- Collaborate with Operations & Accounts team to ensure seamless experience.
- Identify objections or adoption barriers and share insights to enhance the activation process.
- Promote and upsell key value-added services such as eSIM, travel insurance, and ongoing promotional offers.
Founding Engineer - LITMAS
About LITMAS
LITMAS is revolutionizing litigation with the first AI-powered platform built specifically for elite litigators. We're transforming how attorneys research, strategize, draft, and win cases by combining comprehensive case repositories with cutting-edge AI validation and workflow automation. We are a team incubated by experienced litigators, building the future of legal technology.
The Opportunity
We're seeking a Founding Engineer to join our core team and shape the technical foundation of LITMAS. This is a rare opportunity to build a category-defining product from the ground up, working directly with the founders to create technology that will transform the US litigation market.
As a founding engineer, you'll have significant ownership over our technical architecture, product decisions, and company culture. Your code will directly impact how thousands of attorneys practice law.
What You'll Do
- Architect and build core platform features using Python, Node.js, Next.js, React, and MongoDB
- Design and implement production-grade LLM systems with advanced tool usage, RAG pipelines, and agent architectures
- Build AI workflows that combine multiple tools for legal research, validation, and document analysis
- Create scalable RAG infrastructure to handle thousands of legal documents with high accuracy
- Implement AI tool chains that supply agents with structured tool inputs
- Design intuitive interfaces that make complex legal workflows simple and powerful
- Own end-to-end features from conception through deployment and iteration
- Establish engineering best practices for AI systems including evaluation, monitoring, and safety
- Collaborate directly with founders on product strategy and technical roadmap
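The retrieval half of a RAG pipeline, mentioned above, reduces to "embed documents, embed the query, rank by similarity". A toy sketch using bag-of-words vectors and cosine similarity; a production system would use a learned embedding model and a vector database, and the corpus below is invented:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents against the query and return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "motion to dismiss for lack of jurisdiction",
    "breach of contract damages calculation",
    "patent infringement claim construction",
]
top = retrieve("contract breach damages", corpus)
print(top[0])
```

The retrieved passages would then be injected into the LLM prompt as grounding context, which is the "augmented generation" half of RAG.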
The Ideal Candidate
You're not just an AI engineer; you're someone who understands how to build reliable, production-grade AI systems that users can trust. You've wrestled with RAG accuracy, tool reliability, and LLM hallucinations in production. You know the difference between a demo and a system that handles real-world complexity. You're excited about applying AI to transform how legal professionals work.
What We're Looking For
Must-Haves
- Deployed production-grade LLM applications with demonstrable experience in:
- Tool usage and function calling
- RAG (Retrieval-Augmented Generation) implementation at scale
- Agent architectures and multi-step reasoning
- Prompt engineering and optimization
- Knowledge of multiple LLM providers (OpenAI, Anthropic, Cohere, open-source models)
- Background in building AI evaluation and monitoring systems
- Experience with document processing and OCR technologies
- 3+ years of production experience with Node.js, Python, Next.js, and React
- Strong MongoDB expertise including schema design and optimization
- Experience with vector databases (Pinecone, Weaviate, Qdrant, or similar)
- Full-stack mindset with ability to own features from database to UI
- Track record of shipping complex web applications at scale
- Deep understanding of LLM limitations, hallucination prevention, and validation techniques
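The "tool usage and function calling" item above boils down to a dispatch loop: the model returns a structured tool call, and the application routes it to a real function. A minimal sketch, where the tool name, its signature, and the simulated model response are all illustrative assumptions:

```python
import json

def search_cases(query: str, limit: int = 3) -> list:
    # Stand-in for a real legal-research tool (hypothetical).
    return [f"case about {query} #{i}" for i in range(1, limit + 1)]

# Registry mapping tool names (as exposed to the model) to implementations.
TOOL_REGISTRY = {"search_cases": search_cases}

def dispatch(tool_call_json: str):
    """Parse a model-produced tool call and execute the matching function."""
    call = json.loads(tool_call_json)
    fn = TOOL_REGISTRY.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['name']}")
    return fn(**call["arguments"])

# Pretend this JSON came back from the model's function-calling response.
llm_output = '{"name": "search_cases", "arguments": {"query": "estoppel", "limit": 2}}'
results = dispatch(llm_output)
print(results)
```

In a real agent loop, the tool result would be appended to the conversation and the model called again, possibly over several reasoning steps.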
Tech Stack
- Backend: Node.js, Express, MongoDB
- Frontend: Next.js, React, TypeScript, Modern CSS
- AI/ML: LangChain/LlamaIndex, OpenAI/Anthropic APIs, vector databases, custom AI tools
- Additional: Document processing, search infrastructure, real-time collaboration
What We Offer
- Significant equity stake: true ownership in the company you're building
- Competitive compensation commensurate with experience
- Direct impact: your decisions shape the product and company
- Learning opportunity: work with cutting-edge AI and legal technology
- Flexible work: remote-first with a global team
- AI resources: access to latest models and compute resources
Interview Process
One more thing: our process includes deep technical interviews and fit conversations. As part of the evaluation, there will be an extensive take-home test that you should expect to take at least 4-5 hours, depending on your skill level. This allows us to see how you approach real problems similar to what you'll encounter at LITMAS.
Exp: 4-6 years
Position: Backend Engineer
Job Location: Bangalore (office near Cubbon Park, opposite JW Marriott)
Work Mode : 5 days work from office
Requirements:
● Engineering graduate with 3-5 years of experience in software product development.
● Proficient in Python, Node.js, Go
● Good knowledge of SQL and NoSQL
● Strong Experience in designing and building APIs
● Experience with working on scalable interactive web applications
● A clear understanding of software design constructs and their implementation
● Understanding of the threading limitations of Python and multi-process architecture
● Experience implementing Unit and Integration testing
● Exposure to the Finance domain is preferred
● Strong written and oral communication skills
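The "designing and building APIs" requirement above is, at bottom, the HTTP request/response cycle. A standard-library-only sketch of a tiny JSON endpoint; a real service would use a framework, and the `/health` route is an illustrative choice:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    payload = json.loads(resp.read())
server.shutdown()
print(payload)
```

Everything a framework adds (routing tables, validation, serialization, middleware) layers on top of exactly this cycle.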
LogiNext is looking for a technically savvy and passionate Principal DevOps Engineer or Senior Database Administrator to support the development and operations efforts on our product. You will choose and deploy tools and technologies to build and support a robust infrastructure.
You have hands-on experience building secure, high-performing, and scalable infrastructure, and experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging, and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile, and big data applications on cloud
- Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Plan, implement, and maintain robust backup and restoration policies ensuring low RTO and RPO
- Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, and Alibaba Cloud
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations and automate daily tasks
- Ensure High Availability and auto-failover with minimal or no manual intervention
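The monitoring and alerting responsibility above is, at its core, evaluating metric samples against thresholds, which is what tools like Zabbix, CloudWatch, and Nagios do at scale. A toy sketch; the metric names and thresholds are illustrative assumptions:

```python
# Hypothetical alert rules: metric name -> threshold above which we alert.
ALERT_RULES = {
    "cpu_percent": 85.0,
    "disk_used_percent": 90.0,
    "replication_lag_seconds": 30.0,
}

def evaluate(samples, rules=ALERT_RULES):
    """Return the (metric, value) pairs that breach their threshold."""
    return [(metric, value)
            for metric, value in samples.items()
            if metric in rules and value > rules[metric]]

current = {"cpu_percent": 92.3, "disk_used_percent": 71.0,
           "replication_lag_seconds": 4.2}
breaches = evaluate(current)
for metric, value in breaches:
    print(f"ALERT: {metric}={value}")
```

Real systems add what a sketch cannot: sample collection, deduplication, escalation policies, and alert fatigue controls.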
Requirements:
- Bachelor’s degree in Computer Science, Information Technology, or a related field
- 8 to 10 years of experience in designing and maintaining high-volume, scalable micro-services architecture on cloud infrastructure
- Strong background in Linux/Unix administration and Python/Shell scripting
- Extensive experience with cloud platforms such as AWS (EC2, ELB, S3, Auto Scaling, VPC, Lambda), GCP, and Azure
- Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and monitoring tools such as Zabbix, CloudWatch, and Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, and caching mechanisms
- Experience in query analysis, performance tuning, and database redesign
- Experience in enterprise application development, maintenance, and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment, and decision-making skills
Job Responsibilities
Responsibilities for this position include, but are not limited to, the following.
Understand requirements and create low-level design using UML
Develop embedded software as per defined software requirements
Software integration & testing
Background & Skills
Education:
B.E / B.Tech / M.Tech / Master's (Electronics / Telecommunications / Computer Science) or equivalent
Experience & Attributes:
2-8 years’ experience in Embedded system software design, development, and testing.
Excellent communication skills, spoken and written English
Must-have specialized knowledge:
· Embedded C
· Electronics
· RTOS
· Knowledge of Microcontrollers (RISC, CISC)
· CAN Communication & Protocols like UDS, KWP2000, CANopen and J1939
· MISRA standard - 2000/MISRA 2012
· SDLC, Agile Scrum
· Static analysis & tools like LDRA, QAC, or VectorCAST
· Change Management & Tools like JIRA/VSTS
· Version control & Tools like SVN/GIT/Clearcase
· Traceability management & Tools like Reqtify or equivalent
· Design methodologies - like UML
· Software Test Life Cycle
Specialized knowledge – Will be Preferred
· Functional Safety life-cycle & Management - Applies to software design & development
· Safety standards like - IEC-61508, ISO-26262, ISO-25119, ISO-13849
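The diagnostic protocols listed above (UDS in particular) are normally implemented in embedded C; this pure-Python sketch of a UDS ReadDataByIdentifier (service 0x22) exchange is only to illustrate the byte layout. The DID 0xF190 (VIN) is a standard UDS identifier; the simulated ECU reply is invented:

```python
UDS_READ_DATA_BY_IDENTIFIER = 0x22
POSITIVE_RESPONSE_OFFSET = 0x40  # positive response SID = request SID + 0x40

def build_rdbi_request(did: int) -> bytes:
    """Service ID followed by the 16-bit data identifier, big-endian."""
    return bytes([UDS_READ_DATA_BY_IDENTIFIER]) + did.to_bytes(2, "big")

def is_positive_response(request: bytes, response: bytes) -> bool:
    """Check the SID echo rule for a UDS positive response."""
    return response[0] == request[0] + POSITIVE_RESPONSE_OFFSET

req = build_rdbi_request(0xF190)                   # read the VIN
resp = bytes([0x62, 0xF1, 0x90]) + b"SIMULATEDVIN"  # simulated ECU reply
print(req.hex(), is_positive_response(req, resp))
```

On the wire these bytes would be wrapped in a transport layer (ISO-TP over CAN), which handles segmentation for payloads longer than a single frame.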
Key Tasks and Responsibilities
· Software requirement specification writing
· Creating design for assigned modules
· Implementing the code for assigned requirements
· Perform unit testing
· Perform peer reviews or inspection of software work products
· Support testing team on defect analysis
· Adhere to Quality processes
ITSYS Solutions requires highly skilled and motivated developers with 2-3 years of experience and in-depth knowledge of PHP, XML/JSON, and relational databases (MySQL) to work on its web scraping and big data analytics projects.
As a Senior PHP Developer, you will be responsible for
- Interacting with team members to strategize solutions for the project.
- Interacting with clients to understand the project requirements and identify deliverables.
- Writing scalable, optimized, efficient, and bug-free PHP/MySQL code to handle large volumes of data.
- Managing projects and taking responsibility for client deliverables.
- Collaborating with team members and keeping them informed about project schedules and status.
Desired Candidate Profile
- Have high proficiency in core PHP, Javascript, MySQL, XML, HTML & CSS.
- Extensive knowledge of relational databases (MySQL) including complex SQL queries.
- Experience of working with large databases.
- Knowledge of consuming and creating Web Services.
- Be innovative, curious and devoted towards technology.
- Have clear programming concepts.
- Have excellent communication skills.
- Have good analytical skills.
Key Skills: CSS, MySQL, HTML, JavaScript, Core PHP
Educational Qualification: BE / B.Tech (Engineering), B.Sc. (Computer Science), BCA (Computer Application), MCA / PGDCA (Computer Science)
What we offer:
- A vibrant and energy filled work environment where passion and youth combine to deliver solutions to a global clientele.
- Personal growth with the company
- Regular team outings.
- Weekly off on Saturday & Sunday.
Job Type: Full-time
- 3+ years of experience with Ruby on Rails.
- Strong Project & Time Management Skills, along with the ability to apply these skills while working independently, or as part of a team.
- Knowledge of blockchain technology, smart contracts and cryptocurrency will be an added advantage
- Experience in fintech domain will be another added advantage
- Bachelor’s degree in computer programming, computer science, or a related field.
- Fluency in or understanding of specific languages, such as Java, PHP, or Python, and of operating systems may be required.











