
11+ Web Content Jobs in Pune

Apply to 11+ web content jobs in Pune on CutShort.io. Explore the latest web content job opportunities across top companies like Google, Amazon, and Adobe.

Pangolin Marketing

Posted by Debarshi Chowdhury
Remote, Pune, Kolkata, Kochi (Cochin), Hyderabad, Chennai, Shillong, Lucknow, Mumbai, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore), Indore, Ahmedabad, Jaipur, Ranchi, Raipur, Goa, Chandigarh
3 - 6 yrs
₹2.5L - ₹6L / yr
Content Writing
Content Marketing
Content Strategy
Web content
Creative Writing
+3 more

About:

Pangolin is a brand consultancy and creative agency. We help companies create positive change in the world by raising awareness through branding and inspiring through storytelling. We’re growing our team of creatives and strategists. Join us if you’re craving work that makes a real difference in the world, and a work culture that allows you to have a balanced life. 


Role:

Creative Content Writer - B2B/B2C


Description:

Our clients have stories that need telling, and we're looking for storytellers who can weave the right narrative for them. Are you comfortable telling stories of social impact, societal change, industrial evolution, and technological innovation? Are you well-versed in corporate communication, writing content for B2B marketing and sales, and enterprise content marketing? Have you worked on a wide variety of content types: emails, brochures, flyers, social media content, website copy, sales presentations, webinar decks, articles, and whitepapers? Then you might be a good fit for this job.


Competencies:

  • Ability to work on multiple projects simultaneously, effectively juggle diverse writing demands, and deliver on time.
  • High retention capacity and the ability to understand client briefs to create effective content strategies.
  • An unerring eye for detail and a knack for creativity. 

Skills:

  • Ability to interpret data to make recommendations for optimizing online content marketing.
  • Unique, engaging, and varied writing styles with good proofreading and research skills.
  • Knowledge of keywords, meta tags, SEO, and basic writing terminology.

Experience:

  • Understands marketing KPIs and ROI generated from content marketing programs, including an understanding of how content sources influence audience acquisition pipelines.
  • Experienced in writing blogs, product portfolios, reports, web content, brochures, and other marketing collateral for B2B/B2C clients.
  • Has a solid foundation in SEO, content strategy, analytics, copywriting, and copy editing.

Location and Type: 

Remote, Full-time


Evaluation:

Portfolios will be checked for expertise in both B2B and B2C domains. Please share only samples of work you created yourself for actual clients; strictly no SEO blogs or personal blogs. Shortlisting will be followed by a video interview.

A top MNC in Pune.


Agency job via Jobdost, posted by Sathish Kumar
Pune
4 - 10 yrs
₹7L - ₹20L / yr
.NET
ASP.NET
C#
Roles & Responsibilities
  •  Development using .NET Core
  •  Understand requirements and participate in client calls
  •  Work closely with the nearshore developer
  •  Perform code reviews and unit testing as planned
  •  Participate in peer reviews.

Must have
  •  Strong knowledge of C#, .NET Core 5, and Entity Framework
  •  Knowledge of PostgreSQL
  •  Experience with Web API and building Microsoft .NET-based web or enterprise applications
  •  Experience building and consuming ASP.NET MVC and Web API or REST APIs using jQuery, JSON, AJAX, and ASP.NET Web Services.

Good to have
  •  Knowledge of Docker and Kubernetes
  •  Knowledge of AutoMapper
AdElement

Posted by Ritisha Nigam
Pune
2 - 7 yrs
₹5L - ₹15L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
AI/ML

Job Title: Senior AIML Engineer – Immediate Joiner (AdTech)

Location: Pune – Onsite

About Us:

We are a cutting-edge technology company at the forefront of digital transformation, building innovative AI and machine learning solutions for the digital advertising industry. Join us in shaping the future of AdTech!

Role Overview:

We are looking for a highly skilled Senior AIML Engineer with AdTech experience to develop intelligent algorithms and predictive models that optimize digital advertising performance. Immediate joiners preferred.

Key Responsibilities:

  • Design and implement AIML models for real-time ad optimization, audience targeting, and campaign performance analysis.
  • Collaborate with data scientists and engineers to build scalable AI-driven solutions.
  • Analyze large volumes of data to extract meaningful insights and improve ad performance.
  • Develop and deploy machine learning pipelines for automated decision-making.
  • Stay updated on the latest AI/ML trends and technologies to drive continuous innovation.
  • Optimize existing models for speed, scalability, and accuracy.
  • Work closely with product managers to align AI solutions with business goals.

Requirements:

  • 4-6 years of experience in AI/ML, with a focus on AdTech (mandatory).
  • Strong programming skills in Python, R, or similar languages.
  • Hands-on experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn (a toy sketch follows this list).
  • Expertise in data processing and real-time analytics.
  • Strong understanding of digital advertising, programmatic platforms, and ad server technology.
  • Excellent problem-solving and analytical skills.
  • Immediate joiners preferred.
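
To ground the requirements above, here is a toy click-through-rate (CTR) prediction sketch in scikit-learn, the kind of model an AdTech AI/ML role builds and optimizes. All features, data, and numbers are synthetic and hypothetical, not a description of any production stack.

```python
# Toy CTR predictor; every feature and number here is synthetic/hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical impression features: ad position, age bucket, past CTR, hour.
X = rng.random((10_000, 4))
# Synthetic labels: clicks skew toward top ad positions and high past CTR.
y = (0.6 * (1 - X[:, 0]) + 0.4 * X[:, 2] + rng.normal(0, 0.1, 10_000) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
ctr = model.predict_proba(X_test)[:, 1]  # predicted click probability per impression
print(f"holdout AUC: {roc_auc_score(y_test, ctr):.3f}")
```

In a real AdTech setting the same shape of model would be trained on impression logs and served behind a real-time bidding endpoint, with latency and calibration as the binding constraints.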

Preferred Skills:

  • Knowledge of big data technologies like Spark, Hadoop, or Kafka.
  • Experience with cloud platforms like AWS, GCP, or Azure.
  • Familiarity with MLOps practices and tools.

How to Apply:

If you are a passionate AIML engineer with AdTech experience and can join immediately, we want to hear from you. Share your resume and a brief note on your relevant experience.

Join us in building the future of AI-driven digital advertising!

Deqode

Posted by Alisha Das
Bengaluru (Bangalore), Mumbai, Pune, Chennai, Gurugram
5.6 - 7 yrs
₹10L - ₹28L / yr
Amazon Web Services (AWS)
Python
PySpark
SQL

Job Summary:

As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining scalable, high-performance data pipelines using AWS services. With 6+ years of experience, you’ll collaborate closely with data architects, analysts, and business stakeholders to build reliable, secure, and cost-efficient data infrastructure across the organization.

Key Responsibilities:

  • Design, develop, and manage scalable data pipelines using AWS Glue, Lambda, and other serverless technologies
  • Implement ETL workflows and transformation logic using PySpark and Python on AWS Glue (a minimal sketch follows this list)
  • Leverage AWS Redshift for warehousing, performance tuning, and large-scale data queries
  • Work with AWS DMS and RDS for database integration and migration
  • Optimize data flows and system performance for speed and cost-effectiveness
  • Deploy and manage infrastructure using AWS CloudFormation templates
  • Collaborate with cross-functional teams to gather requirements and build robust data solutions
  • Ensure data integrity, quality, and security across all systems and processes
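
As a concrete illustration of the Glue/PySpark work described above, here is a minimal batch ETL sketch. The S3 paths and column names are hypothetical; inside an actual AWS Glue job the same logic would run through a Glue job script, but plain PySpark is shown here for clarity.

```python
# Minimal extract-transform-load in PySpark; paths/columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV landed in S3 (hypothetical bucket).
orders = spark.read.csv("s3://example-raw-bucket/orders/", header=True, inferSchema=True)

# Transform: drop bad rows, normalise types, derive a daily partition column.
clean = (
    orders
    .dropna(subset=["order_id", "amount"])
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet to the curated zone (hypothetical bucket).
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```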

Required Skills & Experience:

  • 6+ years of experience in Data Engineering with strong AWS expertise
  • Proficient in Python and PySpark for data processing and ETL development
  • Hands-on experience with AWS Glue, Lambda, DMS, RDS, and Redshift
  • Strong SQL skills for building complex queries and performing data analysis
  • Familiarity with AWS CloudFormation and infrastructure as code principles
  • Good understanding of serverless architecture and cost-optimized design
  • Ability to write clean, modular, and maintainable code
  • Strong analytical thinking and problem-solving skills


Wissen Technology

Posted by Shikha Srivastav
Pune
8 - 16 yrs
Best in industry
ZooKeeper
Kafka

Job Title: Kafka Architect

Experience range: 8+ years

Location: Pune

Experience:

Kafka Architect with Kafka Connect, Kafka Streams, the broader Kafka ecosystem, and any scripting language.

Kafka brokers, Kafka Connect, Schema Registry, and ZooKeeper (or KRaft); a minimal client-side sketch follows.
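
For concreteness, here is one produce/consume round trip against a Kafka broker using the confluent-kafka Python client. The broker address and topic name are assumptions for illustration (a Kafka Streams topology itself would typically be written in Java); this only shows the ecosystem plumbing the role is built around.

```python
# One produce/consume round trip; broker and topic are illustrative assumptions.
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"   # assumption: local dev broker
TOPIC = "events"            # hypothetical topic

producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, key=b"user-1", value=b'{"action": "login"}')
producer.flush()  # block until the message is delivered

consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "demo-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

msg = consumer.poll(timeout=10.0)  # fetch one message (None on timeout)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```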

Wissen Technology

Posted by Vijayalakshmi Selvaraj
Pune, Mumbai, Bengaluru (Bangalore)
4 - 9 yrs
Best in industry
Rancher
Kubernetes
K8s
DevOps
Puppet

Job Summary:

We are seeking a skilled DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for managing, maintaining, and troubleshooting Rancher clusters, with a strong emphasis on Kubernetes operations. This role requires expertise in automation through shell scripting and proficiency in configuration management tools like Puppet and Ansible. Candidates should be highly self-motivated, capable of working on a rotating schedule, and committed to owning tasks through to delivery.

Key Responsibilities:

  • Set up, operate, and maintain Rancher and Kubernetes (K8s) clusters, including on bare-metal environments.
  • Perform upgrades and manage the lifecycle of Rancher clusters.
  • Troubleshoot and resolve Rancher cluster issues efficiently.
  • Write, maintain, and optimize shell scripts to automate Kubernetes-related tasks (an illustrative sketch follows this list).
  • Work collaboratively with the team to implement best practices for system automation and orchestration.
  • Utilize configuration management tools like Puppet and Ansible (preferred but not mandatory).
  • Participate in a rotating schedule, with the ability to work until 1 AM as required.
  • Take ownership of tasks, ensuring timely delivery with high-quality standards.
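
The posting emphasizes shell scripting; as an illustrative equivalent, here is a short Python sketch using the official kubernetes client that performs a typical automation task, flagging pods that are not healthy. It assumes a reachable kubeconfig and is a sketch of the kind of automation involved, not a statement of this team's tooling.

```python
# Flag pods that are not Running or Succeeded, cluster-wide.
# Assumes kubeconfig access (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```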

Key Requirements:

  • Strong expertise in Rancher and Kubernetes operations and maintenance.
  • Experience in setting up and managing Kubernetes clusters on bare-metal systems is highly desirable.
  • Proficiency in shell scripting for task automation.
  • Familiarity with configuration management tools like Puppet and Ansible (good to have).
  • Strong troubleshooting skills for Kubernetes and Rancher environments.
  • Ability to work effectively in a rotating schedule and flexible hours.
  • Strong ownership mindset and accountability for deliverables.


TVARIT GmbH

Posted by Shivani Kawade
Remote, Pune
2 - 4 yrs
₹8L - ₹20L / yr
Python
PySpark
ETL
Databricks
Azure
+6 more

TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT enables its customers to make intelligent, well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. Renowned reference customers, competent technology, a strong research team from renowned universities, and a renowned AI prize (e.g., EU Horizon 2020) make TVARIT one of the most innovative AI companies in Germany and Europe.

We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English. 

We are seeking a skilled and motivated Data Engineer from the manufacturing industry with over two years of experience to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python; experience with Azure and Terraform is a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.

Skills Required 

  • Experience in the manufacturing industry (metal industry is a plus)
  • 2+ years of experience as a Data Engineer
  • Experience in data cleaning, structuring, and manipulation
  • ETL pipelines: proven experience in designing, building, and maintaining ETL pipelines (an illustrative sketch follows this list)
  • Python: strong proficiency in Python programming for data manipulation, transformation, and automation
  • Experience with SQL and data structures
  • Knowledge of big data technologies such as Spark, Flink, and Hadoop, as well as NoSQL databases
  • Knowledge of at least one cloud platform, such as AWS, Azure, or Google Cloud Platform
  • Proficiency in data management and data governance
  • Strong analytical and problem-solving skills
  • Excellent communication and teamwork abilities
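
As a small illustration of the ETL and data-cleaning work listed above, here is a pandas sketch of one extract-clean-load step. File names, columns, and thresholds are hypothetical; a production pipeline would add scheduling, logging, and validation on top.

```python
# Toy extract-clean-load step; file and column names are hypothetical.
# Writing Parquet requires pyarrow (pip install pandas pyarrow).
import pandas as pd

# Extract: raw sensor readings from a manufacturing line (hypothetical file).
raw = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])

# Clean & structure: drop incomplete rows, enforce types, filter outliers.
clean = raw.dropna(subset=["machine_id", "temperature"])
clean = clean.astype({"machine_id": "string"})
clean = clean[clean["temperature"].between(-40.0, 400.0)]

# Load: write a columnar file ready for downstream analytics.
clean.to_parquet("sensor_readings_clean.parquet", index=False)
```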

Nice To Have 

  • Azure: experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
  • Terraform: knowledge of Terraform for managing cloud infrastructure as code (IaC).


Proounce solutions
Posted by Shivang Mathur
Hyderabad, Bengaluru (Bangalore), Lucknow, Noida, Delhi, Gurugram, Ghaziabad, Faridabad, Chennai, Mumbai, Pune
1 - 4 yrs
₹4L - ₹10L / yr
PL/SQL
Oracle SQL Developer
Oracle DBA

  • Problem-solving skills; should be able to turn an idea on paper into code.
  • Bachelor's degree in computer science or a related field, or equivalent professional experience.
  • 0 - 4 years of database experience (Oracle SQL, PL/SQL).
  • Proficiency in Oracle, with hands-on experience in database design.
  • Creation and implementation of data models.
  • Strong experience with Oracle functions, procedures, triggers, and packages.
  • Willing to learn and quickly adapt to the cutting-edge tools and technologies the work requires.
  • Should be able to write basic procedures and functions.

Publicis Sapient

Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the health of the overall solution.

Job Summary:

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on Design, Development and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time (a streaming sketch follows this list)

• Build functionality for data analytics, search and aggregation
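
To make the real-time ingestion item concrete, here is a minimal Spark Structured Streaming sketch that reads JSON events from a Kafka topic and appends them to Parquet. The broker, topic, schema, and paths are hypothetical, and running it requires the spark-sql-kafka connector on the classpath; it illustrates the pattern, not any specific client pipeline.

```python
# Minimal Kafka -> Parquet streaming ingestion; names/paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, TimestampType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Assumed event schema for the JSON payload.
schema = (
    StructType()
    .add("event_id", StringType())
    .add("event_ts", TimestampType())
    .add("payload", StringType())
)

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/events")                    # hypothetical sink
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
query.awaitTermination()
```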


Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies.

2. Minimum 1.5 years of experience in big data technologies.

3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and the other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable.

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience.

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ, RabbitMQ, or Solace, of search and indexing, and of microservices architectures.

4. Performance tuning and optimization of data pipelines.

5. CI/CD: infrastructure provisioning on cloud, automated build and deployment pipelines, code quality.

6. Working knowledge of data platform services on at least one cloud platform, IAM, and data security.

7. Cloud data specialty and other related big data technology certifications.



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

DataMetica

Posted by Shivani Mahale
Pune
4 - 7 yrs
₹5L - ₹15L / yr
ETL
Informatica PowerCenter
Teradata
Data Warehouse (DWH)
IBM InfoSphere DataStage
Requirements:
  • Must have 4 to 7 years of experience in ETL design and development using Informatica components.
  • Should have extensive knowledge of Unix shell scripting.
  • Understanding of DW principles (fact and dimension tables, dimensional modeling, and data warehousing concepts).
  • Research, develop, document, and modify ETL processes as per data architecture and modeling requirements.
  • Ensure appropriate documentation for all new development and modifications of ETL processes and jobs.
  • Should be good at writing complex SQL queries.

Opportunities:
  • Selected candidates will be provided training on one or more of the following: Google Cloud, AWS, DevOps tools, and big data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka, and will get the chance to be part of enterprise-grade implementations of cloud and big data systems.
  • Will play an active role in setting up a modern data platform based on cloud and big data.
  • Will be part of teams with rich experience in various aspects of distributed systems and computing.
Eilisys Technologies

Posted by Rahul Inamdar
Pune
3 - 10 yrs
₹6L - ₹10L / yr
.NET
Angular (2+)
MS SQLServer
ASP.NET
  • Full-stack developer on .NET and Angular 4/5.
  • Build products as per the new product road-map.
  • Help scale the existing products on the .NET platform.
  • Work on reducing server load by implementing new algorithms for data processing.
  • Work closely with the Product Architect on the SDLC model.
  • Experience in HCM or HRMS would be an added advantage.