
- Design and execute layouts/marketing assets for homepage features, landing pages, digital marketing, social media, site banners, packaging, product labels, newsletters, mobile, email blasts, etc.
- Compress banners and ensure acceptable page load speeds.
- Produce and edit imagery and motion graphics.
- Create culturally relevant artwork that intrigues the target audience to ensure high click-through rates and engagement.
- Coordinate with multiple departments and work with the creative copywriter on all creative projects.
- Integrate content programs with brand campaigns to drive brand to demand.
- Apply knowledge and experience of graphic design across products, websites, advertisements, etc.
- Ensure a user-friendly customer journey by creating visuals that are high in quality and clear to the customer.
- Participate in brainstorming sessions for umbrella campaigns that require market research and ongoing SWOT analysis.
- Focus on building the brand name through creative artwork.
- The candidate must be excellent in motion graphics and video creation.
- Must have hands-on experience in Photoshop and Illustrator.
- Must be a team player with a creative mindset

About the Company:
Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their data life cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to help our customers put their data to work so they can make more intelligent decisions in support of their business strategies. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation (see the illustrative sketch after this list).
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
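To make the infrastructure-as-code item above concrete, here is a minimal Pulumi sketch in Python (Pulumi is one of the tools named in the list). The bucket, VPC, names, and tags are purely hypothetical examples, and the snippet assumes AWS credentials and a configured Pulumi stack; it is an illustration of the style of work, not a prescribed implementation.

```python
"""Minimal Pulumi (Python) sketch: declaring AWS resources as code.

Hypothetical resources for illustration only; assumes AWS credentials
and a configured Pulumi stack.
"""
import pulumi
import pulumi_aws as aws

# A private S3 bucket, e.g. for build artifacts produced by CI/CD
artifact_bucket = aws.s3.Bucket(
    "build-artifacts",
    acl="private",
    tags={"managed-by": "pulumi", "team": "devops"},
)

# A VPC for platform workloads
platform_vpc = aws.ec2.Vpc(
    "platform-vpc",
    cidr_block="10.0.0.0/16",
    enable_dns_hostnames=True,
    tags={"Name": "platform-vpc"},
)

# Export stack outputs so other stacks or pipelines can reference them
pulumi.export("artifact_bucket_name", artifact_bucket.id)
pulumi.export("platform_vpc_id", platform_vpc.id)
```

Running `pulumi up` against a program like this previews and applies the declared state, which is the workflow the GitOps and IaC bullets above refer to.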
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering, or a related field.
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform and Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications:
- Experience in FinOps (cloud cost optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.


Job Description: Data Analyst
Position: Data Analyst
Location: Gurgaon
Experience Level: 1-3 Years
Employment Type: Full-Time
Role Overview
We are looking for a results-driven Data Analyst to join our team and support business decision-making through data insights and analytics. The ideal candidate will be highly proficient in Python, skilled in data visualization tools, and experienced in solving complex problems to drive measurable business outcomes such as revenue growth or cost reduction.
Key Responsibilities
Data Analysis and Insights:
Extract, clean, and analyze large datasets using Python to uncover trends and actionable insights (a brief illustrative sketch follows the Key Responsibilities list).
Develop predictive models and conduct exploratory data analysis to support business growth and operational efficiency.
Business Impact:
Identify opportunities to increase revenue or reduce costs through data-driven strategies.
Collaborate with stakeholders to understand business challenges and provide analytics-driven solutions.
Data Visualization:
Build intuitive dashboards and reports using tools like Zoho Analytics, Looker Studio, or Tableau.
Present findings and insights clearly to both technical and non-technical stakeholders.
Problem-Solving:
Work on end-to-end problem-solving, from identifying issues to implementing data-backed solutions.
Continuously optimize processes through automation and advanced analytics techniques.
Collaboration and Reporting:
Work closely with teams across departments to understand data needs and deliver solutions tailored to their goals.
Provide ongoing reporting and insights to track key performance indicators (KPIs) and project outcomes.
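As a purely illustrative aside, the sketch below shows the kind of Python-based cleaning and exploratory analysis referred to in the responsibilities above. The orders.csv file and its columns (order_date, region, revenue) are hypothetical placeholders, not part of the role.

```python
"""Illustrative pandas sketch: clean a dataset and compute a simple trend.

The input file and column names are hypothetical placeholders.
"""
import pandas as pd

# Load and clean: parse dates, drop duplicate rows, fill missing revenue with 0
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
orders = orders.drop_duplicates()
orders["revenue"] = orders["revenue"].fillna(0)

# Monthly revenue by region: the kind of aggregate that feeds a dashboard or KPI report
monthly = (
    orders.assign(month=orders["order_date"].dt.to_period("M"))
    .groupby(["month", "region"], as_index=False)["revenue"]
    .sum()
)

print(monthly.head())
```

From an aggregate like this, the analyst would typically move on to visualization (Matplotlib/Seaborn or a BI tool) and a recommendation tied to revenue or cost.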
Required Skills & Qualifications
Technical Expertise:
Strong proficiency in Python, including libraries such as Pandas, NumPy, Matplotlib, and Seaborn.
Hands-on experience with BI tools like Zoho Analytics, Looker Studio, or Tableau.
Analytical Skills:
Proven ability to analyze data to generate insights that drive decision-making.
Demonstrated success in addressing business challenges and achieving results such as revenue growth or cost reduction.
Problem-Solving:
Experience working on real-world business problems, identifying root causes, and implementing data-based solutions.
Communication:
Strong ability to communicate complex insights effectively to diverse audiences.
Excellent presentation and storytelling skills to translate data into actionable business strategies.
Preferred Qualifications
Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, or related field.
Certifications in data analytics tools or platforms (e.g., Tableau, Looker).
Experience with advanced analytics or machine learning concepts.
What We Offer
Opportunity to work on impactful projects that directly influence business outcomes.
Collaborative, innovative, and supportive work environment.
Access to cutting-edge tools and technologies.
Competitive salary and growth opportunities.
Job Responsibilities:
Section 1 -
- Responsible for providing L1 support to build, design, deploy, and maintain cloud solutions on AWS.
- Implement, deploy, and maintain development, staging, and production environments on AWS.
- Familiarity with serverless architecture and AWS services such as Lambda, Fargate, EBS, Glue, etc.
- Understanding of infrastructure as code and familiarity with related tools such as Terraform, Ansible, CloudFormation, etc.
Section 2 -
- Managing the Windows and Linux machines, Kubernetes, Git, etc.
- Responsible for L1 management of server, network, container, storage, and database services on AWS.
Section 3 -
- Monitor production workload alerts in a timely manner and address issues quickly.
- Responsible for monitoring and maintaining the backup and DR process.
Section 4 -
- Responsible for documenting the process.
- Responsible for leading cloud implementation projects with end-to-end execution.
Qualifications: Bachelor of Engineering / MCA, preferably with AWS or other cloud certification
Skills & Competencies
- Linux and Windows server management and troubleshooting.
- Experience with AWS services such as CloudFormation, EC2, RDS, VPC, EKS, ECS, Redshift, Glue, etc.
- Knowledge of Kubernetes and containers.
- Understanding of setting up AWS messaging, streaming, and queuing services (MSK, Kinesis, SQS, SNS, MQ); a brief illustrative sketch follows this list.
- Understanding of serverless architecture.
- Strong understanding of networking concepts.
- Managing monitoring and alerting systems.
- Sound knowledge of database concepts such as data warehouses, data lakes, and ETL jobs.
- Good project management skills.
- Documentation skills.
- Understanding of backup and DR.
Soft Skills - Project management, Process Documentation
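To illustrate the messaging and queuing item above, here is a minimal boto3 sketch that sends and receives a message on SQS. The queue name and message body are hypothetical, and the snippet assumes AWS credentials with SQS permissions; it is only a sketch of the kind of hands-on work involved.

```python
"""Minimal boto3 sketch: send and receive a message on an SQS queue.

Queue name and message body are hypothetical; assumes AWS credentials
with SQS permissions are configured.
"""
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Create (or fetch) a queue and send a message
queue_url = sqs.create_queue(QueueName="demo-alerts")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody='{"event": "backup-completed"}')

# Poll for messages and delete them once handled
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    print("received:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```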
Ideal Candidate:
- AWS certification and 2-4 years of experience, including project execution experience.
- Someone who is interested in building sustainable cloud architecture with automation on AWS.
- Someone who is interested in learning and being challenged on a day-to-day basis.
- Someone who can take ownership of the tasks and is willing to take the necessary action to get it done.
- Someone who is curious to analyze and solve complex problems.
- Someone who is honest about the quality of their work and comfortable taking ownership of both their successes and failures.
Behavioral Traits
- We are looking for someone who wants to be part of a creative, innovation-driven environment alongside other team members.
- We are looking for someone who understands the importance of teamwork and individual ownership at the same time.
- We are looking for someone who can debate logically, disagree respectfully, admit when proven wrong, and learn from their mistakes and grow quickly.

Egregore Labs (www.egregorelabs.com) is a financial software company founded in 2017 by Prashant Vijay (ISB, Tulane) & Hari Balaji (IIM Ahmedabad, IIT Madras), each of whom has spent over a decade in financial services, with the majority of their experience at Goldman Sachs across New York, Hong Kong & Singapore in roles spanning Trading, Quant & Technology.
Opportunity
Full Stack Developer (frontend-leaning)
Responsibilities:
Implement responsive and performant UIs with a user-centered approach, using frontend technologies including React.js, JavaScript (ES6), TypeScript, SCSS, etc.
Build backend REST APIs on Python 3 based server frameworks for deployment and scaling of our product(s) (see the illustrative sketch after this list)
Write meaningful test cases for frontend & backend platforms
Integrate our products with 3rd party products/tools/services
Develop infrastructure for delivering services using a performance-driven approach, build databases, schedule automated jobs, etc.
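As a minimal sketch of the backend work described above, here is a small REST endpoint in Python 3. Flask is used only because it is a common Python server framework; the posting does not name one, and the /health and /items routes and the in-memory store are hypothetical.

```python
"""Minimal Flask sketch of a Python 3 REST API.

Framework choice and routes are illustrative assumptions, not part of the posting.
"""
from flask import Flask, jsonify, request

app = Flask(__name__)
items = []  # in-memory store standing in for a real database


@app.route("/health")
def health():
    # Simple liveness endpoint, useful behind a load balancer
    return jsonify(status="ok")


@app.route("/items", methods=["GET", "POST"])
def items_endpoint():
    if request.method == "POST":
        item = request.get_json()
        items.append(item)
        return jsonify(item), 201
    return jsonify(items)


if __name__ == "__main__":
    app.run(debug=True)
```

Meaningful test cases for such an endpoint would exercise both the GET and POST paths, for example with Flask's built-in test client and pytest.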
Ideal Background / Experience
At least 24 months of diverse experience in web development in a product- or services-oriented environment, with exposure to working production deployments
Expertise in programming using Python/Javascript or similar scripting languages
In-depth exposure to technologies used in web-based SaaS products, including REST APIs
Sound understanding of Postgres and NoSQL databases such as MongoDB
Nice to have: exposure to any of the following
AWS
Azure
ELK
Object-relational mappers (SQLAlchemy, etc.)
Google APIs
Microservices Architecture Pattern
NodeJS / ExpressJS
Work from Home, office in Noida
Experience: 2+ years
Salary: ₹700,000.00 - ₹1,200,000.00 per year
Responsibilities:
- Designing and implementing Java-based applications.
- Analyzing user requirements to inform application design.
- Defining application objectives and functionality.
- Aligning application design with business goals.
- Developing and testing software.
- Debugging and resolving technical problems that arise.
- Producing detailed design documentation.
- Recommending changes to existing Java infrastructure.
- Developing multimedia applications.
- Developing documentation to assist users.
- Ensuring continuous professional self-development.
Requirements
- Degree in Computer Science or related field.
- Experience with user interface design, database structures, and statistical analyses.
- Analytical mindset and good problem-solving skills.
- Excellent written and verbal communication.
- Good organizational skills.
- Skills required: Java, Spring Boot, Hibernate, databases.
The Role:
You are a Backend Engineer passionate about building world-class mobile and web applications with a performant backend and a glitch-free experience. You will be part of a team delivering technology that enhances the in-app experience for our users and enables our development teams to build mobile apps more easily, faster, and more efficiently. You will build compelling and engaging applications for web and mobile platforms, employing your experience with modern Node.js frameworks and tooling such as Express.js, Hapi.js, Yarn, and PM2, proficiency in Elasticsearch, and experience building RESTful APIs and integrating with databases such as MongoDB and MySQL/Postgres, with a discipline of collaboration and pair programming. Our clientele largely comprises BFSI companies, so experience with financial applications and enterprise data security will be a big plus.
Responsibilities:
• Contribute to an Agile team to build web and mobile applications, APIs, SDKs and other tools as required
• Collaborate with various teams within IORTA to realize the requirements for the project and rapidly deliver iterative solutions
• Provide task plans and follow trends in technology and suggest new approaches to application design and development
• Review and evaluate designs for compliance with development guidelines
• Implement best practices and methods to improve the development process within the team and organization
Embedded C
C
ISO 26262
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure, delivering data solutions and services on bare metal, on-premises, and all cloud platforms. Our engagement model is built on standard DevOps practices and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve the customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to Infrastructure Services, Software as a Service, and Cloud Services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience in implementation of continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages including Shell, Python, and Perl (a brief illustrative Python sketch follows this list)
· Hands-on exposure to modern IT infrastructure (e.g., Docker Swarm, Mesos, Kubernetes, OpenStack)
· Exposure to relational database technologies (MySQL, Postgres, Oracle) or any NoSQL database
· Worked on open-source tools for logging, monitoring, search engine, caching, etc.
· Professional certification in AWS or any other cloud is preferable
· Excellent problem solving and troubleshooting skills
· Must have good written and verbal communication skills
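As a small illustration of the scripting and monitoring skills listed above, here is a hypothetical Python health-check script. The endpoint URL and the 90% disk threshold are made-up examples; a real setup would push results into the team's alerting stack rather than print them.

```python
"""Hypothetical monitoring sketch: check disk usage and a service endpoint.

Endpoint and threshold are illustrative assumptions only.
"""
import shutil
import urllib.request

SERVICE_URL = "http://localhost:8080/health"  # hypothetical service endpoint
DISK_ALERT_THRESHOLD = 0.90  # alert when the volume is more than 90% full


def disk_usage_fraction(path="/"):
    usage = shutil.disk_usage(path)
    return usage.used / usage.total


def service_is_up(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


if __name__ == "__main__":
    used = disk_usage_fraction()
    flag = " ALERT" if used >= DISK_ALERT_THRESHOLD else ""
    print(f"root volume {used:.0%} used{flag}")
    print("service:", "up" if service_is_up(SERVICE_URL) else "DOWN")
```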
Key Responsibilities
Ambitious individuals who can work under their own direction towards agreed targets/goals.
Must be flexible with office timings to accommodate multinational client time zones.
Will be involved in solution design from the conceptual stage through the development cycle and deployment.
Be involved in development operations and support internal teams.
Improve infrastructure uptime, performance, resilience, and reliability through automation.
Willing to learn new technologies and work on research-oriented projects.
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
Independent thinking and the ability to work in a fast-paced environment with creativity and brainstorming.
www.banyandata.com

