- Should have solid experience selling IT services in the USA/Europe region
- Should have knowledge of IT services such as web services, cloud services, and mobile technology.
- Experience working in B2B Sales
- Create and implement sales strategies in line with the company's growth objectives
- Managing prospective client relationships through all phases of the sales cycle
- Developing and maintaining strong professional relationships with global accounts, to maximize sales
- The ability to write reports and proposals
- Comfortable working in different time zones to connect with our target audience
- Experience working with end-to-end sales from lead generation to lead conversion.
- Proven track record of closing new deals with both large and mid-size companies
- Knowledge of demand generation
- Lead generation skills for IT services projects
- Develop and execute strategies to expand inbound lead/demand generation
- Must have good working experience with Upwork, Freelance.com, and PeoplePerHour
- Good verbal and written communication skills are a must

Experience - 3+ years
Location - Indore/Chennai [WFO]
Technical Skills
- Hands-on Collibra Developer experience (3+ yrs) and knowledge of data governance.
- Strong knowledge of Collibra Asset Model, Collibra Operating Model, roles, and responsibilities
- Experience with Collibra Dashboards, Collibra Views, Collibra Lineage, HTML editing
- SQL expertise for building Collibra DQ Rules and data quality automation
Key Responsibilities
- Develop, customize, and implement end-to-end Collibra workflows, including:
- Collibra development - workflows (BPMN), dashboards, views, interfaces
- Collibra integration - Java, Spring Boot, APIs, ESB/MuleSoft
- Collibra Data Quality (DQ) - writing SQL rules, DQ scorecards, DQ monitoring (see the sketch after this list)
- Metadata management & data governance - asset models, communities, domains, lineage
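For illustration only: a minimal Python sketch of the kind of DQ and integration work listed above. The instance URL, credentials, the /rest/2.0/assets endpoint shape, and the example completeness rule are assumptions, not details taken from this role.

```python
# Hypothetical sketch: Collibra instance, credentials, and response shape are assumed.
import requests

COLLIBRA_URL = "https://your-instance.collibra.com"   # hypothetical instance
AUTH = ("svc_governance", "********")                  # hypothetical service account

# Example data-quality check expressed as SQL, the kind of rule a Collibra DQ
# definition might wrap: flag customer rows with a missing or blank email.
DQ_RULE_SQL = """
SELECT COUNT(*) AS failing_rows
FROM customers
WHERE email IS NULL OR TRIM(email) = ''
"""

def find_assets_by_name(name: str) -> list:
    """Look up governance assets by name via the Collibra REST 2.0 API (assumed endpoint)."""
    resp = requests.get(
        f"{COLLIBRA_URL}/rest/2.0/assets",
        params={"name": name},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    print(DQ_RULE_SQL)
    print(find_assets_by_name("Customer Email Completeness"))
```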
Job Role: Sr. Data Engineer
Location: Navrangpura, Ahmedabad
WORK FROM OFFICE - 5 DAYS A WEEK (UK Shift)
Job Description:
• 5+ years of core experience in Python and data engineering.
• Must have experience with Azure Data Factory and Databricks.
• Exposure to Python data-processing and automation libraries such as NumPy, pandas, Beautiful Soup, Selenium, pdfplumber, and Requests (see the sketch below).
• Proficient in SQL programming.
• Knowledge of DevOps practices such as CI/CD, Jenkins, and Git.
• Experience working with Azure Databricks.
• Able to coordinate with teams across multiple locations and time zones.
• Strong interpersonal and communication skills, with the ability to lead a team and keep it motivated.
Mandatory Skills: Data Engineer - Azure Data Factory, Databricks, Python, SQL/MySQL/PostgreSQL
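As a rough illustration of the parsing libraries referenced above, here is a minimal Python sketch that pulls tables from a PDF into pandas; the file name and table layout are assumptions.

```python
# Hypothetical sketch: input file and table layout are illustrative only.
import pdfplumber
import pandas as pd

def extract_tables_to_frame(pdf_path: str) -> pd.DataFrame:
    """Collect every table found in a PDF into a single pandas DataFrame."""
    frames = []
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            for table in page.extract_tables():
                # Treat the first row as the header; adjust for real documents.
                frames.append(pd.DataFrame(table[1:], columns=table[0]))
    return pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()

if __name__ == "__main__":
    df = extract_tables_to_frame("sample_report.pdf")  # hypothetical input file
    print(df.head())
```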
Key Responsibilities
- Data Architecture & Pipeline Development
- Design, implement, and optimize ETL/ELT pipelines using Azure Data Factory, Databricks, and Synapse Analytics (a minimal Databricks-style sketch follows this list).
- Integrate structured, semi-structured, and unstructured data from multiple sources.
- Data Storage & Management
- Develop and maintain Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake solutions.
- Ensure proper indexing, partitioning, and storage optimization for performance.
- Data Governance & Security
- Implement role-based access control, data encryption, and compliance with GDPR/CCPA.
- Ensure metadata management and data lineage tracking with Azure Purview or similar tools.
- Collaboration & Stakeholder Engagement
- Work closely with BI developers, analysts, and business teams to translate requirements into data solutions.
- Provide technical guidance and best practices for data integration and transformation.
- Monitoring & Optimization
- Set up monitoring and alerting for data pipelines.
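For illustration of the pipeline work referenced in the ETL/ELT bullet above, a minimal Databricks-style PySpark sketch follows; mount paths, column names, and the Delta layout are assumptions rather than details of the actual pipelines.

```python
# Hypothetical sketch: paths and columns are illustrative; assumes a Databricks
# (or Delta-enabled) Spark environment.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV landed in the lake (e.g., by an ADF copy activity).
raw = spark.read.option("header", True).csv("/mnt/datalake/raw/orders/")

# Transform: basic cleansing plus a derived column.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("net_amount",
                   F.col("amount").cast("double") - F.col("discount").cast("double"))
)

# Load: write curated data as Delta, partitioned by date for pruning.
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("/mnt/datalake/curated/orders/"))
```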
Overview
Adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
With a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are seeking a skilled DevOps Service Desk Specialist to join our global DevOps team. As a DevOps Service Desk Specialist, you will be responsible for managing service desk operations, system monitoring, troubleshooting, and supporting automation workflows to ensure operational stability and excellence for enterprise IT projects. You will be providing 24/7 support for critical application environments for industry leaders in the automotive industry.
Responsibilities:
Incident Management: Monitor and respond to tickets raised by the DevOps team or end users (a minimal ticket-polling sketch follows this section).
Support users with prepared troubleshooting guides. Maintain detailed incident logs, track SLAs, and prepare root cause analysis reports.
Change & Problem Management: Support scheduled changes, releases, and maintenance activities. Assist in identifying and tracking recurring issues.
Documentation & Communication: Maintain process documentation, runbooks, and knowledge base articles. Provide regular updates to stakeholders on incidents and resolutions.
On-Call Duty: Staff a 24/7 on-call rotation, handle incidents outside normal office hours, and escalate to and coordinate with the onsite team, customers, and third parties where necessary.
Tool & Platform Support: Manage and troubleshoot CI/CD tools (e.g., Jenkins, GitLab), container platforms (e.g., Docker, Kubernetes), and cloud services (e.g., AWS, Azure).
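Since ServiceNow is named as the ticketing tool, a minimal, hypothetical Python sketch of polling open incidents via the ServiceNow Table API is shown below; the instance URL, credentials, and filter query are assumptions.

```python
# Hypothetical sketch: instance, credentials, and query are illustrative only.
import requests

INSTANCE = "https://your-instance.service-now.com"   # hypothetical instance
AUTH = ("servicedesk.bot", "********")               # hypothetical credentials

def open_p1_incidents(limit: int = 10) -> list:
    """Return the newest open priority-1 incidents via the Table API."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/incident",
        params={
            "sysparm_query": "active=true^priority=1^ORDERBYDESCopened_at",
            "sysparm_limit": limit,
        },
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]

if __name__ == "__main__":
    for inc in open_p1_incidents():
        print(inc["number"], inc["short_description"])
```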
Requirements:
Technical experience in a Java Enterprise environment as a developer or DevOps specialist.
Familiarity with DevOps principles and ticketing tools like ServiceNow.
Strong analytical, organizational, and communication abilities; easy to work with.
Strong problem-solving skills and the ability to work in a fast-paced 24/7 environment.
Optional: Experience in our relevant business domain (automotive industry); familiarity with IT process frameworks such as Scrum and ITIL.
Skills & Requirements
Java Enterprise, DevOps, Service Desk, Incident Management, Change Management, Problem Management, System Monitoring, Troubleshooting, Automation Workflows, CI/CD, Jenkins, GitLab, Docker, Kubernetes, AWS, Azure, ServiceNow, Root Cause Analysis, Documentation, Communication, On-Call Support, ITIL, SCRUM, Automotive Industry.
- Build interactive consumer data from multiple systems and expose it RESTfully to the UI through a Node.js backend
- Define code architecture decisions to support a high-performance and scalable product with a minimal footprint
- Address and resolve any technical issues
- Collaborate well with engineers, researchers, and data implementation specialists to design and create advanced, elegant and efficient systems
Hiring for GCP-compliant cloud data lake solutions for clinical trials for a US-based pharmaceutical company.
Summary
This is a key position within the Data Sciences and Systems organization, responsible for data systems and related technologies. The role will be part of the Amazon Web Services (AWS) data lake strategy, roadmap, and AWS architecture for data systems and technologies.
Essential/Primary Duties, Functions and Responsibilities
The essential duties and responsibilities of this position are as follows:
- Collaborate with data science and systems leaders and other stakeholders to roadmap, structure, prioritize and execute on AWS data engineering requirements.
- Work closely with the IT organization and other functions to ensure that business needs and requirements, IT processes, and regulatory compliance requirements are met.
- Build the AWS infrastructure required for optimal extraction, transformation, and loading of data from vendor-site clinical data sources using AWS big data technologies
- Create and maintain optimal AWS data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product
- Work with data and analytics experts to strive for greater functionality in our data systems
Requirements
- A minimum of a bachelor's degree in Computer Science, Mathematics, Statistics, or a related discipline is required; a master's degree is preferred. A minimum of 6-8 years of technical management experience is required. Equivalent experience may be accepted.
- Experience with data lake and/or data warehouse implementation is required
- Minimum bachelor's degree in Computer Science, Computer Engineering, Mathematical Engineering, Information Systems, or related fields
- Project experience with visualization tools (AWS, Tableau, R Studio, Power BI, R Shiny, D3.js) and databases. Experience with Python, R, or SAS coding is a big plus.
- Experience with AWS-based S3, Lambda, and Step Functions (see the sketch after this list).
- Strong team player who can work effectively in a collaborative, fast-paced, multi-tasking environment
- Solid analytical and technical skills and the ability to exchange innovative ideas
- Quick learner and passionate about continuously developing your skills and knowledge
- Ability to solve data-acquisition problems using AWS
- Ability to work in an interdisciplinary environment; able to interpret and translate very abstract, technical approaches into healthcare- and business-relevant solutions
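As an illustration of the S3/Lambda/Step Functions requirement above, here is a minimal, hypothetical sketch of a Lambda handler that starts an ETL state machine for each newly landed S3 object; the state machine ARN is an assumption, and the event shape follows the standard S3 notification format.

```python
# Hypothetical sketch: the state machine ARN is illustrative only.
import json
import boto3

sfn = boto3.client("stepfunctions")

PIPELINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:clinical-etl"  # hypothetical

def handler(event, context):
    """Lambda entry point: kick off the ETL state machine for each new S3 object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sfn.start_execution(
            stateMachineArn=PIPELINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"statusCode": 200}
```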
Collaborate with the CIO on application architecture and design of our ETL (Extract, Transform, Load) and other aspects of data pipelines. Our stack is built on top of the well-known Spark ecosystem (e.g. Scala, Python).
Periodically evaluate the architectural landscape for efficiencies in our data pipelines and define current state, target state architecture, and transition plans/roadmaps to achieve the desired architectural state.
Conduct, lead, and implement proofs of concept to prove out new technologies in support of the architecture vision and guiding principles (e.g. Flink).
Assist in the ideation and execution of architectural principles, guidelines, and technology standards that can be leveraged across the team and organization, especially around ETL and data pipelines.
Promote consistency between all applications by leveraging enterprise automation capabilities.
Provide architectural consultation, support, mentoring, and guidance to project teams, e.g. architects, data scientists, developers.
Collaborate with the DevOps Lead on technical features.
Define and manage work items using Agile methodologies (Kanban, Azure Boards, etc.).
Lead data engineering efforts (e.g. Scala Spark, PySpark); a minimal PySpark/Delta streaming sketch follows this section.
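To make the batch-and-streaming expectation concrete (as referenced at the end of the list above), here is a minimal PySpark Structured Streaming sketch writing to Delta; paths and schema are illustrative assumptions, and it presumes the Delta Lake libraries are available on the cluster.

```python
# Hypothetical sketch: source/sink paths and schema are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("amount", DoubleType()),
])

# Read newline-delimited JSON files as a stream from a landing folder.
events = (spark.readStream
          .schema(schema)
          .json("/data/landing/events/"))

# Continuously append to a Delta table, tracking progress via a checkpoint.
query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/data/checkpoints/events/")
         .outputMode("append")
         .start("/data/delta/events/"))

query.awaitTermination()
```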
Knowledge & Experience
Experienced with Spark, Delta Lake, and Scala to work with petabytes of data (both batch and streaming flows)
Knowledge of a wide variety of open-source technologies including but not limited to NiFi, Kubernetes, Docker, Hive, Oozie, YARN, Zookeeper, PostgreSQL, RabbitMQ, Elasticsearch
A strong understanding of AWS/Azure and/or technology as a service (IaaS, SaaS, PaaS)
Strong verbal and written communication skills are a must, as well as the ability to work effectively across internal and external organizations and virtual teams
Appreciation of building high volume, low latency systems for the API flow
Core dev skills (SOLID principles, IoC, 12-factor app, CI/CD, Git)
Messaging, microservice architecture, caching (Redis), containerization, performance and load testing, REST APIs
Knowledge of HTML, JavaScript frameworks (preferably Angular 2+), TypeScript
Appreciation of Python and C# .NET Core or Java
Appreciation of global data privacy requirements and cryptography
Experience in system testing and automated testing, e.g. unit tests, integration tests, mocking/stubbing
Relevant industry and other professional qualifications
Tertiary qualifications (degree level)
We are an inclusive employer and welcome applicants from all backgrounds. We pride ourselves on our commitment to Equality and Diversity and are committed to removing barriers throughout our hiring process.
Key Requirements
Extensive data engineering development experience (e.g., ETL), using well-known stacks (e.g., Scala Spark)
Experience in Technical Leadership positions (or looking to gain experience)
Background in software engineering
The ability to write technical documentation
Solid understanding of virtualization and/or cloud computing technologies (e.g., Docker, Kubernetes)
Experience designing software solutions, with an appreciation for UML and the odd sequence diagram
Experience operating within an Agile environment
Ability to work independently and with minimum supervision
Strong project development management skills, with the ability to successfully manage and prioritize numerous time-pressured analytical projects/work tasks simultaneously
Able to pivot quickly and make rapid decisions based on changing needs in a fast-paced environment
Works constructively with teams and acts with high integrity
Passionate team player with an inquisitive, creative mindset and ability to think outside the box.
Job Description:
- The resource needs to have experience working on Exalogic administration activities and Oracle products (Oracle Banking Platform, Exalogic, and WebLogic).
- Technical role where he/she works on day-to-day assigned tasks
- Schedules and organizes work assignments and helps manage client demands to ensure their satisfaction.
Preferred Skills:
- A few of the activities are listed below:
- Operational support for 3 Exalogic racks, including:
- Exalogic Infrastructure patching/upgrade (Compute Node, ILOM, Control Stack, IB Switch, ZFS Storage)
- 140+ VMs spread across DEV/QV/PROD/DR/IT to support, along with WebLogic
- OEM 12c/13c is used to monitor Exalogic machines
- Platinum Support patching twice per year
- Exalogic ZFS storage management & advisory for optimization








