Job Description:
- The role requires the person to hunt and open new accounts for the company's software and platform solution offerings, land and expand, and manage existing accounts and relationships to grow the business and increase share of wallet, with a clear focus on high client satisfaction
- Complete ownership of the sales cycle
- To be successful, the candidate should have experience in software/consultative solution sales and should have done concept selling
- Should have an understanding of the IT landscape in an enterprise; an understanding of the logistics supply chain would be an added advantage
- The role requires a quick thinker and an innovative problem solver; the ability to think on one's feet will stand the candidate in good stead
Responsibilities
- Hunting New Accounts
- Business Development
- Enterprise account & sales management – grow share of wallet (SOW)
- Manage the entire sales cycle
- Maintain a high level of customer satisfaction
- Pipeline management, accurate sales forecasting, sales negotiations and business closure
- Exceed monthly and quarterly sales targets

About Griffon Technology
What You’ll Be Doing:
● Own the architecture and roadmap for scalable, secure, and high-quality data pipelines
and platforms.
● Lead and mentor a team of data engineers while establishing engineering best practices,
coding standards, and governance models.
● Design and implement high-performance ETL/ELT pipelines using modern Big Data
technologies for diverse internal and external data sources.
● Drive modernization initiatives including re-architecting legacy systems to support
next-generation data products, ML workloads, and analytics use cases.
● Partner with Product, Engineering, and Business teams to translate requirements into
robust technical solutions that align with organizational priorities.
● Champion data quality, monitoring, metadata management, and observability across the
ecosystem.
● Lead initiatives to improve cost efficiency, data delivery SLAs, automation, and
infrastructure scalability.
● Provide technical leadership on data modeling, orchestration, CI/CD for data workflows,
and cloud-based architecture improvements.
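To make the data-quality and observability responsibilities above concrete, here is a minimal sketch of the kind of per-batch validation such a platform runs. It is plain Python for illustration only; the column names and rules are hypothetical, and in a real pipeline equivalent checks would run inside Spark jobs or an orchestration framework, not as a standalone script.

```python
# Minimal per-batch data-quality check. Column names and rules below are
# hypothetical examples, not taken from any specific pipeline.

def validate_batch(rows, rules):
    """Return a list of (row_index, column, message) for every failed rule."""
    failures = []
    for i, row in enumerate(rows):
        for column, check, message in rules:
            if not check(row.get(column)):
                failures.append((i, column, message))
    return failures

# Each rule: (column, predicate that must hold, failure message).
rules = [
    ("id",     lambda v: v is not None, "id must not be null"),
    ("amount", lambda v: isinstance(v, (int, float)) and v >= 0,
               "amount must be non-negative"),
]

batch = [
    {"id": 1,    "amount": 10.5},   # clean row
    {"id": None, "amount": -3},     # fails both rules
]

print(validate_batch(batch, rules))
```

Failures like these would typically be emitted as metrics or quarantined rows, which is where the monitoring and observability responsibilities above pick up.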
Qualifications:
● Bachelor's degree in Engineering, Computer Science, or relevant field.
● 8+ years of relevant and recent experience in a Data Engineer role.
● 5+ years recent experience with Apache Spark and solid understanding of the
fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Demonstrated ability to design, review, and optimize scalable data architectures across
ingestion.
● Strong coding skills in Scala and Python, with the ability to switch between them with ease.
● Advanced working SQL knowledge and experience working with a variety of relational
databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Strong understanding of Delta Lake architecture and working with Parquet, JSON, CSV,
and similar formats.
● Experience establishing and enforcing data engineering best practices, including CI/CD
for data, orchestration and automation, and metadata management.
● Comfortable working in an Agile environment.
● Machine Learning knowledge is a plus.
● Demonstrated ability to operate independently, take ownership of deliverables, and lead
technical decisions.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic
environment.
REPORTING: This position will report to Sr. Technical Manager or Director of Engineering as
assigned by Management.
EMPLOYMENT TYPE: Full-Time, Permanent
SHIFT TIMINGS: 10:00 AM - 07:00 PM IST
Job Title: Tech Lead and SSE – Kafka, Python, and Azure Databricks (Healthcare Data Project)
Experience: 4 to 12 years
Role Overview:
We are looking for a highly skilled Tech Lead with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving skills, and the ability to collaborate with cross-functional teams.
Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.
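The streaming responsibilities above can be illustrated with a minimal, broker-free sketch of the kind of logic a real-time pipeline applies to an event stream: a tumbling-window count per key. The event shape here (epoch seconds plus a patient identifier) is a hypothetical example; in production this logic would sit behind a Kafka consumer feeding Databricks rather than operate on an in-memory list.

```python
from collections import defaultdict

# Broker-free sketch of a tumbling-window aggregation, the kind of logic
# a Kafka -> Databricks pipeline applies continuously to an event stream.
# The (timestamp, key) event shape is a hypothetical example.

def tumbling_window_counts(events, window_seconds):
    """Group (epoch_seconds, key) events into fixed windows; count per key."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "a"), (5, "a"), (12, "b"), (61, "a")]
print(tumbling_window_counts(events, 60))
```

With 60-second windows, the first three events fall in the window starting at 0 and the last in the window starting at 60; a streaming engine performs the same grouping incrementally instead of over a finished list.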
Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred, AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, problem-solving mindset, and ability to lead complex data projects.
- Excellent communication and stakeholder management skills.
- Work with other company leaders to design and drive the plan for the company's vision.
- Recruit great engineers, in collaboration with the company's recruiting team.
- Develop engineers on the team, helping them advance in their careers.
- Empower the engineering team to achieve a high level of technical productivity, reliability and simplicity.
- Contribute to engineering-wide initiatives as a member of the company's engineering management team.
Requirements:
- Bachelor's degree in Computer Science or related technical discipline (Tier I colleges preferable).
- A track record of leading productive engineering teams.
- Successfully recruited great people to your teams.
- The ability to thrive on a high level of autonomy and responsibility.
- The desire to encourage a healthy work environment that's both supportive and challenging.
- A genuine excitement to help engineers develop new skills and advance in their careers.
- Enough technical sense to ask engineers good questions about architecture and product decisions.
- 7+ years of relevant engineering and hands-on technical management experience.
- Previous experience at good tech startups/product companies.
- Strong hands-on experience in full stack development and architecture using NodeJS, MVC, microservices, UI & UX (React), ORM tools (Sequelize), and CI/CD tools (Git, Jenkins, AWS).
- Excellent knowledge of RDBMS, SQL, NoSQL, caches, and Linux-based systems.
About Us:
Developed in formal collaboration with the University of Cambridge in May 2000, HeyMath! is an Ed-Tech company whose mission is to Raise the Game in Maths for school systems around the world. We do this using technology to deliver engaging teaching methodologies and personalised learning paths for students. HeyMath! has been successfully adopted by CBSE schools since 2004, with positive outcomes for the entire ecosystem.
Check us out at www.heymath.com
We plan to work mainly from home in 2022 and the virtual office atmosphere is collegiate, informal and friendly, with small high-impact teams making a difference to customers.
What we are looking for:
Experience in building and re-engineering cloud based solutions on AWS.
Strong knowledge of Object-Oriented Programming (OOP) and design patterns is a must. Hands-on development experience with the Spring MVC framework.
Experience working on Java 8 or above.
Must have very good knowledge of RDBMS such as MySQL and performance tuning of the same.
Exposure to server-side and client-side caching mechanisms. Ability to debug the applications and provide instant workable solutions.
Experience with Elasticsearch, Kafka, or Kubernetes (or all three) is a nice-to-have.
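The server-side caching exposure mentioned above can be sketched with a minimal memoization example. It is shown in Python purely for brevity and as an assumption about the pattern intended; in the Spring stack this role actually uses, the idiomatic equivalent is the @Cacheable annotation over a service method.

```python
from functools import lru_cache

# Minimal server-side caching sketch: memoize an expensive lookup so
# repeated requests for the same key skip recomputation. The fetch_report
# function and its payload are hypothetical stand-ins for a slow DB query.

CALLS = {"count": 0}

@lru_cache(maxsize=128)
def fetch_report(customer_id):
    CALLS["count"] += 1                      # counts real "DB hits"
    return {"customer": customer_id, "total": customer_id * 10}

fetch_report(1)
fetch_report(1)          # second call is served from the cache
print(CALLS["count"])    # only one underlying lookup happened
```

The debugging skill the posting asks for often starts exactly here: checking whether a slow path is being recomputed on every request or correctly served from a cache layer.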
- Proven software development experience, with a focus on iOS app development
- Have published at least one original iOS app
- Experience with iOS SDK
- Experience working with remote data via REST and JSON
- Experience with third-party libraries and APIs
- Working knowledge of the general mobile landscape, architectures, trends, and emerging technologies
- Experienced in the Swift language
Job Responsibilities
- Design and build advanced applications for the iOS platform
- Collaborate with cross-functional teams to define, design, and ship new features
- Work with outside data sources and APIs
- Unit-test code for robustness, including edge cases, usability, and general reliability
- Work on bug fixing and improving application performance
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency
Their services are available across the globe, with over 65% of their client base coming from the US, UK, and Canada. The company's primary focus is on Ayurveda: taking this ancient knowledge to anyone who wishes to bring back balance to their health and apply its tools in everyday life.
As an Ayurvedic Copywriter, you will be responsible for researching and writing about a variety of topics on skincare, health, diet, and lifestyle from the Ayurvedic perspective. These articles are meant for foreign markets, including the USA and the EU.
What you will do:
- Creating articles about a variety of Ayurvedic topics for foreign markets (i.e. USA and EU)
- Researching Ayurvedic background and benefits of ingredients used in the organization’s products
- Researching our target audience’s interests and needs
- Creating highly valuable articles with information that’s useful for foreign markets
- Staying up-to-date with the latest trends in the USA and EU regarding Ayurveda, skincare, health and wellness
- Creating mind-blowing value for our customers
- Ensuring compliance with law (e.g. copyright and regulatory bodies) (training will be given in-house)
Desired Candidate Profile
What you need to have:
- Degree in Ayurveda
- Interest in and experience with creative copywriting
- Good organizational, self and time-management skills
- Ability to multi-task and follow deadlines
- Willingness to learn skills outside of comfort zone
- Willingness to grow in a fast-paced environment
- Not afraid of challenges and hard work
- Reliable team player with a sense of ownership
- Attention to detail
- Ability to take initiative
Must Have Skills:
- Good experience in PySpark, including DataFrame core functions and Spark SQL
- Good experience with SQL databases; able to write queries of fair complexity
- Should have excellent experience in Big Data programming for data transformations and aggregations
- Good grasp of ELT architecture: business-rules processing and data extraction from the data lake into data streams for business consumption
- Good customer communication skills
- Good analytical skills
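As an illustration of the "queries of fair complexity" the skills above refer to, here is a join-plus-aggregation query run with Python's built-in sqlite3 module. The in-memory database, table names, and figures are all hypothetical stand-ins for a production SQL database or Spark SQL table.

```python
import sqlite3

# A join + aggregation query of modest complexity, run against an
# in-memory SQLite database standing in for a production warehouse.
# Schema and data are hypothetical examples.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, region TEXT);
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'EU'), (2, 'US');
    INSERT INTO orders VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 70.0);
""")

rows = conn.execute("""
    SELECT c.region, COUNT(*) AS n_orders, SUM(o.amount) AS revenue
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY c.region
    HAVING SUM(o.amount) > 60
    ORDER BY revenue DESC
""").fetchall()

print(rows)   # [('EU', 2, 150.0), ('US', 1, 70.0)]
```

The same JOIN/GROUP BY/HAVING shape carries over directly to Spark SQL, which is why the two skills are listed together.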
Technology Skills (Good to Have):
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure (available in HDInsight and Databricks)
Dear Candidate,
Greetings of the day!
As discussed, please find the job description below.
Job Title : Hadoop developer
Experience : 3+ years
Job Location : New Delhi
Job type : Permanent
Knowledge and Skills Required:
Brief Skills:
Hadoop, Spark, Scala and Spark SQL
Main Skills:
- Strong experience in Hadoop development
- Experience in Spark
- Experience in Scala
- Experience in Spark SQL
Why OTSi!
Working with OTSi gives you the assurance of a successful, fast-paced career.
Exposure to infinite opportunities to learn and grow, familiarization with cutting-edge technologies, cross-domain experience and a harmonious environment are some of the prime attractions for a career-driven workforce.
Join us today, as we assure you 2000+ friends and a great career. Happiness begins at a great workplace!
Feel free to refer this opportunity to your friends and associates.
About OTSI (CMMI Level 3): Founded in 1999 and headquartered in Overland Park, Kansas, OTSI offers global reach and local delivery to companies of all sizes, from start-ups to Fortune 500s. Through offices across the US and around the world, we provide universal access to exceptional talent and innovative solutions in a variety of delivery models to reduce overall risk while optimizing outcomes and enabling our customers to thrive in a global economy.
OTSI's global presence, scalable and sustainable world-class infrastructure, business continuity processes, and ISO 9001:2000 and CMMI Level 3 certifications make us a preferred service provider for our clients. OTSI has expertise in different technologies, enhanced by our partnerships and alliances with industry giants like HP, Microsoft, IBM, Oracle, and SAP, among others. Object Technology Solutions India Pvt Ltd is a leading global Information Technology (IT) services and solutions company offering a wide array of solutions for a range of key verticals. The company is headquartered in Overland Park, Kansas, and has a strong presence in the US, Europe and Asia-Pacific, with a Global Delivery Center based in India. OTSI offers a broad range of IT application solutions and services including e-business solutions, Enterprise Resource Planning (ERP) implementation and post-implementation support, application development, application maintenance, and software customization services.
OTSI Partners & Practices
- SAP Partner
- Microsoft Silver Partner
- Oracle Gold Partner
- Microsoft CoE
- DevOps Consulting
- Cloud
- Mobile & IoT
- Digital Transformation
- Big data & Analytics
- Testing Solutions
OTSI Honors & Awards:
- #91 in the Inc. 5000
- Among the fastest-growing IT companies in the Inc. 5000
• JavaScript, Angular 2.0, React, React Native, Angular Ionic (optional), jQuery, MySQL, MongoDB, Firebase.
• Comfortable with REST API development using Java or Python.
• Exposure to Artificial Intelligence / Machine Learning concepts will be a plus.
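The REST API comfort asked for above can be sketched framework-free: a route table mapping (method, path) pairs to handlers that return JSON. The routes and payloads here are hypothetical examples; a real service would use a framework such as Flask, FastAPI, or a Java equivalent rather than hand-rolled dispatch.

```python
import json

# Minimal REST-style dispatch sketch: handlers registered per
# (method, path), returning JSON bodies. Routes are hypothetical.

ROUTES = {}

def route(method, path):
    """Decorator that registers a handler for a method/path pair."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/health")
def health():
    return {"status": "ok"}

def handle(method, path):
    """Dispatch a request; return (status_code, json_body)."""
    fn = ROUTES.get((method, path))
    if fn is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(fn())

print(handle("GET", "/health"))   # (200, '{"status": "ok"}')
```

Keeping dispatch separate from handler logic like this is also what makes the handlers easy to unit-test, regardless of which framework ultimately serves them.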