

Koantek
http://www.koantek.com
Jobs at Koantek


The Sr. AWS/Azure/GCP Databricks Data Engineer at Koantek will use modern data engineering techniques and methods with Advanced Analytics to support business decisions for our clients. Your goal is to support the use of data-driven insights to help our clients achieve business outcomes and objectives. You will collect, aggregate, and analyze structured/unstructured data from multiple internal and external sources and communicate patterns, insights, and trends to decision-makers. You will help design and build data pipelines, data streams, reporting tools, information dashboards, data service APIs, data generators, and other end-user information portals and insight tools. You will be a critical part of the data supply chain, ensuring that stakeholders can access and manipulate data for routine and ad hoc analysis to drive business outcomes using Advanced Analytics. You are expected to function as a productive member of a team, working and communicating proactively with engineering peers, technical leads, project managers, product owners, and resource managers.

Requirements:
Strong experience as an AWS/Azure/GCP Data Engineer; must have AWS/Azure/GCP Databricks experience
Expert proficiency in Spark, Scala, and Python
Must have data migration experience from on-prem to cloud
Hands-on experience using Kinesis to process and analyze streaming data, Event/IoT Hubs, and Cosmos DB
In-depth understanding of AWS/Azure/GCP cloud, data lake, and analytics solutions
Expert-level hands-on experience designing and developing applications on Databricks
Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
In-depth understanding of Spark architecture, including Spark Streaming, Spark Core, Spark SQL, DataFrames, RDD caching, and Spark MLlib
Hands-on experience with the industry technology stack for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
Hands-on knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake
Good working knowledge of code versioning tools such as Git, Bitbucket, or SVN
Hands-on experience using Spark SQL with various data sources such as JSON, Parquet, and key-value pairs
Experience preparing data for Data Science and Machine Learning, with exposure to model selection, model lifecycle, hyperparameter tuning, model serving, deep learning, etc.
Demonstrated experience preparing data and automating and building data pipelines for AI use cases (text, voice, image, IoT data, etc.)
Good to have programming experience with .NET or Spark/Scala
Experience creating tables, partitioning, bucketing, loading, and aggregating data using Spark Scala and Spark SQL/PySpark
Knowledge of AWS/Azure/GCP DevOps processes such as CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
Working experience with Visual Studio, PowerShell scripting, and ARM templates; able to build ingestion to ADLS and enable a BI layer for analytics
Strong understanding of data modeling and defining conceptual, logical, and physical data models
Big data/analytics/information analysis/database management in the cloud
IoT/event-driven/microservices in the cloud; experience with private and public cloud architectures, their pros/cons, and migration considerations
Ability to stay up to date with industry standards and technological advancements that enhance data quality and reliability to advance strategic initiatives
Working knowledge of RESTful APIs, the OAuth2 authorization framework, and security best practices for API gateways
Guide customers in transforming big data projects, including development and deployment of big data and AI applications
Guide customers on data engineering best practices; provide proofs of concept, architect solutions, and collaborate as needed
2+ years of hands-on experience designing and implementing multi-tenant solutions using AWS/Azure/GCP Databricks for data governance, data pipelines for near-real-time data warehousing, and machine learning solutions
5+ years of overall experience in software development, data engineering, or data analytics using Python, PySpark, Scala, Spark, Java, or equivalent technologies, with hands-on expertise in Apache Spark (Scala or Python)
3+ years of experience in query tuning, performance tuning, troubleshooting, and debugging Spark and other big data solutions
Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience
Ability to manage competing priorities in a fast-paced environment
Ability to resolve issues
Basic experience with or knowledge of agile methodologies
AWS Certified Solutions Architect - Professional
Databricks Certified Associate Developer for Apache Spark
Microsoft Certified: Azure Data Engineer Associate
Google Cloud Certified: Professional
