
We are seeking an enthusiastic Email Marketing Executive to join our digital marketing team. In this role, you will be responsible for generating high-quality leads for SEO, social media marketing, and website design & development services, specifically targeting the US and Canada markets. Your goal will be to drive lead generation efforts that contribute to increased sales revenue for the company.
Key Responsibilities
· Generate potential clients through email and LinkedIn outreach.
· Apply a basic understanding of SEO and how it intersects with email marketing.
· Create email marketing templates and send bulk email campaigns.
· Manage the contact database and assist with lead generation activities.
· Target the US and Canada markets.
· Maintain sound knowledge of email outreach practices.
Experience: 6 months – 4 years
Skills:
· Minimum high school diploma in any stream.
· At least 6 months of experience working in a digital marketing company.
· Proficiency in generating leads for SEO, Social Media Marketing, and Website Design and Development services.
· Experience in lead generation specifically targeting audiences in the US, Canada, and UK markets.
Working Days: Monday-Friday
Job Type: Full-time
Education: Secondary (10th Pass) (Preferred)

About Thynk Google Media
Thynk Google Media is a dynamic digital marketing agency dedicated to helping brands grow online with innovative, results-driven strategies. Our team specializes in SEO, social media marketing, content creation, PPC advertising, and website management, tailoring solutions to each client’s unique goals. We focus on maximizing ROI and enhancing brand visibility across digital channels, consistently delivering impactful results. With an emphasis on transparency, data-driven strategies, and customer-centric approaches, Thynk Google Media partners with clients to build lasting digital success.
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including in recent roles
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide your CTC breakup (Fixed + Variable).
- Are you available for a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
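As a hedged sketch of the data-drift monitoring responsibility above: the Population Stability Index (PSI) is one common drift metric, and this stdlib-only example with its 0.2 alert threshold is an illustrative assumption, not a prescribed part of the stack described in the posting.

```python
import math
from bisect import bisect_right

def psi(baseline, current, buckets=10):
    """Population Stability Index between two numeric samples.

    Bucket edges come from baseline quantiles; a small epsilon
    avoids log(0) when a bucket is empty in one sample.
    """
    eps = 1e-6
    srt = sorted(baseline)
    # Quantile-based bucket edges from the baseline distribution.
    edges = [srt[int(len(srt) * i / buckets)] for i in range(1, buckets)]

    def fractions(sample):
        counts = [0] * buckets
        for x in sample:
            counts[bisect_right(edges, x)] += 1
        return [c / len(sample) for c in counts]

    b, c = fractions(baseline), fractions(current)
    return sum((cb - bb) * math.log((cb + eps) / (bb + eps))
               for bb, cb in zip(b, c))

# A PSI above ~0.2 is often treated as drift worth alerting on.
baseline = [i / 100 for i in range(1000)]        # roughly uniform on [0, 10)
shifted  = [5 + i / 200 for i in range(1000)]    # mass moved into [5, 10)
print(psi(baseline, baseline) < 0.01)  # identical data, near-zero PSI
print(psi(baseline, shifted) > 0.2)    # shifted data, large PSI
```

In a production pipeline this kind of check would run as a scheduled task comparing serving data against the training baseline, with the resulting metric pushed to CloudWatch/Grafana for alerting.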
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
CTC: up to 20 LPA
Required Skills:
- Strong experience in SAP EWM Technical Development.
- Proficiency in ABAP (Reports, Interfaces, Enhancements, Forms, BAPIs, BADIs).
- Hands-on experience with RF developments, PPF framework, and queue monitoring.
- Understanding of EWM master data, inbound/outbound processes, and warehouse tasks.
- Experience with SAP integration technologies (IDoc, ALE, Web Services).
- Good analytical, problem-solving, and communication skills.
Nice to Have:
- Exposure to S/4HANA EWM.
- Knowledge of Functional EWM processes.
- Experience in Agile / DevOps environments.
If interested, kindly share your updated resume at 82008 31681.
Looking for techno-functional experience in SAP MDG for the following data objects: Material, Vendor, Customer, GL, PC, and CC, with core expertise in data replication using the web services method and well versed in migration processes.
1. Key MDG Objects: Material master data, Customer master data, Vendor master data
2. Data Migration: Coordinate migration activities such as data mappings, data replication, data migration, and preliminary data validation; coordinate bug fixing during migration
3. Communication: Outstanding data migration skills from a technical perspective; proven experience in SAP integrations from one SAP S/4HANA system to another
It is mandatory for the consultant to have replication experience for large volumes of data using web services.
Rich data in large volumes is getting collected at the edge (outside a datacenter) in use cases like autonomous vehicles, smart manufacturing, satellite imagery, smart retail, smart agriculture, etc. These datasets are characterized by being unstructured (images/videos), large (petabytes per month), and distributed (across edge, on-prem, and cloud), and they form the input for training AI models to reach higher degrees of automation. Akridata is building products that solve these unique challenges and is at the forefront of this edge data revolution.
The company is backed by prominent VCs, has its entire software engineering team based out of India, and provides ample opportunities for from-scratch design and development.
Role:
This role is an individual contributor role with key responsibilities in developing web server
backends for Akridata management plane software that provides a ‘single pane of glass’ for users to manage assets, specify and monitor large volume data pipelines at scale involving 10s of petabytes of data.
This role involves:
1. Working with tech leads and the rest of the team on feature design activities and picking appropriate tools and techniques for implementation.
2. Being a hands-on developer, able to independently make correct implementation choices and follow sound development practices to ensure an enterprise-grade application.
3. Guiding and mentoring junior team members.
What we are looking for:
1. A Bachelor’s or Master’s degree in computer science with strong CS fundamentals and
problem solving.
2. 5+ years of hands-on experience with software development with 3+ years on web
backend development.
3. A good understanding of backend application interactions with relational databases like MySQL, Postgres, etc.
4. Knowledge of web server development frameworks, preferably in Python.
5. Enthusiastic to work in a dynamic, fast paced startup environment.
Good to have:
1. Hands-on experience with designing database schema and implementing and debugging SQL queries for optimal performance for large datasets
2. Experience working with applications deployed on Kubernetes clusters.
3. Experience working on a product from the early stages of its development, typically in a startup environment.
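The schema-design and query-performance point in the "Good to have" list above can be sketched with stdlib SQLite; the `assets` table and all identifiers here are hypothetical, and the databases named in the posting (MySQL, Postgres) behave analogously with their own EXPLAIN output formats.

```python
import sqlite3

# In-memory database; the schema and all names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE assets (
        id INTEGER PRIMARY KEY,
        site TEXT NOT NULL,
        captured_at TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO assets (site, captured_at) VALUES (?, ?)",
    [(f"site-{i % 50}", f"2024-01-{(i % 28) + 1:02d}") for i in range(5000)],
)

query = "SELECT COUNT(*) FROM assets WHERE site = ?"

# Without an index, filtering on `site` forces a full-table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, ("site-7",)).fetchone()
print(plan_before[-1])  # detail column: a SCAN over the table

# With an index on `site`, SQLite can answer via a b-tree search instead.
conn.execute("CREATE INDEX idx_assets_site ON assets (site)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, ("site-7",)).fetchone()
print(plan_after[-1])  # detail column now mentions the index
```

On large datasets this scan-versus-search difference is what makes indexing decisions part of schema design rather than an afterthought.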

- SharePoint site as well as Excel files from local HRIS systems
- Analyze data on a monthly, quarterly, and annual basis regarding recruitment and attrition
- Prepare data reports using Excel and PowerPoint
- Ensure data integrity by identifying issues and fixing errors either in SharePoint or the local HRIS system
- Track recruitment costs on a regional and global basis
- Work closely with the Finance team to monitor global recruitment budget costs and the new hire/attrition pipeline
- Lead HR data projects, such as data reviews/audits and mass data changes
- Assist the HR team with various data projects, as requested
Required Skills
- Intermediate Microsoft Office skills, including Excel, PowerPoint, and Word
- Strong attention to detail
- Excellent verbal and written communication skills
- Ability to build strong working relationships across different locations and departments
- Strong analytics and problem solving: able to identify anomalies in data and follow up to find an explanation
- Must maintain confidentiality among the
- Working on test-driven development.
- Working on continuous integration and continuous delivery (CI/CD) pipelines.
- Identify target audience and grow our email list
- Design and implement direct email marketing campaigns
- Proofread emails for clarity, grammar and spelling
- Ensure mobile-friendly email templates
- Write newsletters including all company updates
- Upgrade our email templates using graphics, personalization and advanced features
- Ensure prompt and accurate communication with clients via email to minimize unsubscribes
- Create email databases for lead generation
We are looking for a full-time "Front-End Developer"
Job Description:
We are looking for a skilled Front-End Developer. In this role, you will be responsible for developing and implementing user interface components using React.js or Angular 4 and above.
Requirements:
- Previous experience working as a React.js developer.
- In-depth knowledge of JavaScript, CSS, HTML.
- Knowledge of React tools including React.js and Webpack.
- Experience with browser-based debugging and performance testing software.
- Excellent troubleshooting skills.
Experience: 0 to 2 years.
Location: Patna/Jaipur (Work from Home)
Salary: As per market standards
#developer #experience #hiring #jobs #agrix #helpingothers #tagafriend #frontend
Required Qualifications and Skills:
- 3-5 years of work experience in a development background, with at least 2 years of experience in Java, Spring, Spring Boot, Hibernate or JPA, MySQL, Oracle, and Spring MVC.
- B.E. degree in Computer Science, Graduate in Software Engineering or equivalent
- Experience in Core JAVA, Spring, Spring Boot Frameworks.
- Experience with ORM's like Hibernate.
- Good knowledge of developing RESTful web services using Spring Boot, Java 1.x, Servlet 2.4, JSP 2.0, JDBC 3.0, JavaMail, Struts 2.x, HTML, HTML5, Angular 7+, JavaScript, JSF, Bootstrap 2.x-3.x, jQuery, CSS 3.x, Maven 3.x, and Apache Tomcat 7
- Knowledge of AWS cloud.
- Experience with a messaging queue, e.g., Apache Kafka, ActiveMQ, etc.
- Experience with web services (REST and SOAP).
- Experience working with toolsets like Eclipse IDE and SQL clients.
- Experience using application servers like JBoss, Tomcat, WildFly, and GlassFish.
- Experience using tools like SOAP UI and Postman.
- Ability to write SQL queries to fetch data.
- Knowledge of microservices, Redis cache, and MongoDB (or any other NoSQL database) is good to have








