

JK Technosoft Ltd
https://jktech.com
Jobs at JK Technosoft Ltd
About the Role:
We are looking for a Data Architect with a strong background in data engineering and cloud data platforms. The ideal candidate will design and implement scalable data architectures that power enterprise analytics, AI/ML, and GenAI solutions, ensuring data availability, quality, and governance across the organization.
Key Responsibilities:
Data Architecture & Strategy
- Design & Architecture: Design and implement robust, scalable, and optimized data engineering solutions on the Databricks platform. Architect data pipelines that scale efficiently and reliably.
- Data Pipeline Development: Develop ETL/ELT pipelines leveraging Databricks notebooks, Delta Lake, the Snowflake tech stack, Azure Data Factory, etc.
- Cloud Integration: Work closely with cloud platforms like Azure, AWS, or GCP to integrate Databricks or Snowflake with data storage (e.g., ADLS, S3, etc.), databases, and other services.
- Performance Optimization: Optimize the performance of data workflows by tuning Databricks clusters, improving query performance, and identifying bottlenecks in data processing.
- Collaboration: Collaborate with data scientists, analysts, and business stakeholders to understand business requirements and translate them into scalable data solutions.
- Data Governance & Security: Ensure best practices for data security, governance, and compliance when working with sensitive or large datasets.
- Automation & Monitoring: Automate data pipeline deployments and create monitoring dashboards for ongoing performance checks.
- Continuous Improvement: Stay up to date with the latest Databricks features and Snowflake ecosystem best practices to continuously improve existing systems and processes.
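The ETL/ELT pipeline work above typically boils down to transform steps such as deduplication and type casting before an upsert. A minimal pure-Python sketch of such a step (the Databricks/Delta Lake specifics such as a `MERGE INTO` upsert are assumed, and `raw_orders` is a hypothetical input feed):

```python
from datetime import datetime

def clean_orders(raw_rows):
    """Deduplicate raw order records by order_id, keeping the latest
    event per key, and cast string fields to typed values - the kind
    of cleaned batch a Delta Lake MERGE upsert would be fed."""
    latest = {}
    for row in raw_rows:
        ts = datetime.fromisoformat(row["updated_at"])
        key = row["order_id"]
        if key not in latest or ts > latest[key][0]:
            latest[key] = (ts, {
                "order_id": key,
                "amount": float(row["amount"]),  # cast from source string
                "updated_at": ts,
            })
    return [rec for _, rec in latest.values()]

raw_orders = [  # hypothetical sample feed
    {"order_id": "A1", "amount": "10.5", "updated_at": "2024-01-01T10:00:00"},
    {"order_id": "A1", "amount": "12.0", "updated_at": "2024-01-02T10:00:00"},
    {"order_id": "B2", "amount": "7.25", "updated_at": "2024-01-01T09:00:00"},
]
cleaned = clean_orders(raw_orders)
```

In a real pipeline the same logic would run as a Spark DataFrame transformation across partitions; the single-process version only illustrates the dedupe-and-cast contract.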
Required Skills & Experience:
- 12+ years of experience in Data Architecture / Data Engineering roles.
- Proven expertise in data modeling, ETL/ELT design, and cloud-based data solutions (Amazon Redshift, Snowflake, BigQuery, or Synapse).
- Hands-on experience with data pipeline orchestration tools (Airflow, dbt, Azure Data Factory, etc.).
- Proficiency in Python, SQL, and Spark for data processing and integration.
- Experience with API integrations and data APIs for AI systems.
- Excellent communication and stakeholder management skills.
We are looking for a Technical Lead - GenAI with a strong foundation in Python, Data Analytics, Data Science or Data Engineering, system design, and practical experience in building and deploying Agentic Generative AI systems. The ideal candidate is passionate about solving complex problems using LLMs, understands the architecture of modern AI agent frameworks like LangChain/LangGraph, and can deliver scalable, cloud-native back-end services with a GenAI focus.
Key Responsibilities:
- Design and implement robust, scalable back-end systems for GenAI agent-based platforms.
- Work closely with AI researchers and front-end teams to integrate LLMs and agentic workflows into production services.
- Develop and maintain services using Python (FastAPI/Django/Flask), with best practices in modularity and performance.
- Leverage and extend frameworks like LangChain, LangGraph, and similar to orchestrate tool-augmented AI agents.
- Design and deploy systems in Azure Cloud, including usage of serverless functions, Kubernetes, and scalable data services.
- Build and maintain event-driven / streaming architectures using Kafka, Event Hubs, or other messaging frameworks.
- Implement inter-service communication using gRPC and REST.
- Contribute to architectural discussions, especially around distributed systems, data flow, and fault tolerance.
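The agent orchestration described above follows an observe-act loop. A minimal framework-free sketch of that loop (the tool registry and the fake model below are hypothetical stand-ins; LangChain/LangGraph provide production versions of this machinery):

```python
def calculator(expression: str) -> str:
    # Hypothetical tool: evaluate simple arithmetic only.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return str(eval(expression))  # safe: input restricted above

TOOLS = {"calculator": calculator}

def fake_llm(prompt: str) -> dict:
    """Stand-in for an LLM call; a real agent would parse a model
    response. Here: request the tool once, then signal completion."""
    if "Observation:" in prompt:
        return {"action": "final", "input": ""}
    return {"action": "calculator", "input": "2 + 2"}

def run_agent(query: str, max_steps: int = 3) -> str:
    """Minimal observe-act loop of the kind LangChain/LangGraph
    orchestrate: call the model, dispatch tool calls, and feed
    results back until the model signals it is done."""
    scratchpad = query
    for _ in range(max_steps):
        step = fake_llm(scratchpad)
        if step["action"] == "final":
            break
        result = TOOLS[step["action"]](step["input"])
        scratchpad += f"\nObservation: {result}"
    return scratchpad

answer = run_agent("What is 2 + 2?")
```

The `max_steps` bound is the usual guard against a model that never emits a final answer; the frameworks expose the same knob.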
Required Skills & Qualifications:
- Strong hands-on back-end development experience in Python along with Data Analytics or Data Science.
- Strong track record on platforms like LeetCode or in real-world algorithmic/system problem-solving.
- Deep knowledge of at least one Python web framework (e.g., FastAPI, Flask, Django).
- Solid understanding of LangChain, LangGraph, or equivalent LLM agent orchestration tools.
- 2+ years of hands-on experience in Generative AI systems and LLM-based platforms.
- Proven experience with system architecture, distributed systems, and microservices.
- Strong familiarity with cloud infrastructure (any major provider) and deployment practices.
- Data Engineering or Analytics expertise preferred, e.g., Azure Data Factory, Snowflake, Databricks, ETL tools (Talend, Informatica), BI tools (Power BI, Tableau), data modelling, or data warehouse development.
The recruiter has not been active on this job recently. You may apply but please expect a delayed response.
Roles and Responsibilities:
- Design, develop, and maintain the end-to-end MLOps infrastructure from the ground up, leveraging open-source systems across the entire MLOps landscape.
- Create pipelines for data ingestion and transformation; build, test, and deploy machine learning models; and monitor and maintain the performance of these models in production.
- Manage the MLOps stack, including version control systems, continuous integration and deployment tools, containerization, orchestration, and monitoring systems.
- Ensure that the MLOps stack is scalable, reliable, and secure.
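Monitoring deployed models, as described above, usually starts with a drift check on incoming data. A minimal sketch using a mean/standard-deviation shift (a deliberate simplification of metrics like PSI; the threshold and sample values are illustrative assumptions):

```python
from statistics import mean, stdev

def drift_alert(baseline, live, z_threshold=3.0):
    """Flag drift when the live batch mean falls more than
    z_threshold baseline standard deviations from the baseline mean.
    Real MLOps stacks compute richer statistics per feature."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(live) != mu
    z = abs(mean(live) - mu) / sigma
    return z > z_threshold

baseline = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9]   # training-time feature values
stable_batch = [10.0, 10.3, 9.9]                # looks like the baseline
drifted_batch = [25.0, 26.0, 24.5]              # clearly shifted
```

In production this check would run on a schedule against each monitored feature, with alerts wired into the monitoring stack (Prometheus, Grafana, etc.).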
Skills Required:
- 3-6 years of MLOps experience
- Preferably worked in the startup ecosystem
Primary Skills:
- Experience with E2E MLOps systems like ClearML, Kubeflow, MLflow, etc.
- Technical expertise in MLOps: Should have a deep understanding of the MLOps landscape and be able to leverage open-source systems to build scalable, reliable, and secure MLOps infrastructure.
- Programming skills: Proficient in at least one programming language, such as Python, and have experience with data science libraries, such as TensorFlow, PyTorch, or Scikit-learn.
- DevOps experience: Should have experience with DevOps tools and practices, such as Git, Docker, Kubernetes, and Jenkins.
Secondary Skills:
- Version Control Systems (VCS) tools like Git and Subversion
- Containerization technologies like Docker and Kubernetes
- Cloud Platforms like AWS, Azure, and Google Cloud Platform
- Data Preparation and Management tools like Apache Spark, Apache Hadoop, and SQL databases like PostgreSQL and MySQL
- Machine Learning Frameworks like TensorFlow, PyTorch, and Scikit-learn
- Monitoring and Logging tools like Prometheus, Grafana, and Elasticsearch
- Continuous Integration and Continuous Deployment (CI/CD) tools like Jenkins, GitLab CI, and CircleCI
- Explainability and interpretability tools like LIME and SHAP
Roles and Responsibilities:
- Design and implement scalable web applications and platforms using technologies such as TypeScript, NestJS, Angular, NodeJS, ExpressJS, TypeORM, and Postgres
- Good understanding of web and REST API design patterns
- Experience with AWS technologies such as EKS, ECS, ECR, Fargate, EC2, Lambda, and ALB will be an added advantage
- Hands-on experience with unit test frameworks like Jest
- Good working knowledge of JIRA, Confluence, and Git
- Basic knowledge of Kubernetes and Terraform for infrastructure as code
- Basic knowledge of Docker and Docker Compose
- Strong understanding of microservices architecture and ability to implement components independently
- Proven track record of problem-solving skills
- Excellent communication skills
Experience: 8–10 years
Location: NCR
Roles and Responsibilities:
System Analyst
The individual in this role will gather, document, and analyze client functional, transactional, and insurance business requirements across all insurance functions and third-party integrations. The System Analyst will also work within a cross-functional project team to provide business analytical support and leadership from the business side. The individual will play a highly visible, client-facing, consultative role, offering system solutions that enhance client implementations and transform client workflows and business processes. The individual should be strong at mapping business functions and attributes to insurance rules and data.
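Mapping business attributes to insurance rules, as described above, often amounts to a lookup from client fields to validation or rating rules. A hypothetical sketch (the rule table, field names, and territory codes are invented for illustration, not taken from any client system):

```python
# Hypothetical P&C rule table mapping client attributes to validation
# rules, the way a system analyst's mapping document would.
RULES = {
    "vehicle_age": lambda v: 0 <= v <= 25,          # insurable vehicle age
    "coverage_limit": lambda v: 0 < v <= 1_000_000,  # policy limit bounds
    "territory_code": lambda v: v in {"NCR", "MUM", "BLR"},
}

def validate_submission(submission: dict) -> list:
    """Return the names of attributes that violate their mapped rule."""
    return [field for field, rule in RULES.items()
            if field in submission and not rule(submission[field])]

ok = validate_submission(
    {"vehicle_age": 5, "coverage_limit": 500_000, "territory_code": "NCR"})
bad = validate_submission({"vehicle_age": 40, "territory_code": "GOA"})
```

In practice these rules would live in database tables (hence the Oracle/PL-SQL emphasis below) rather than in code, but the mapping exercise is the same.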
Skills Required:
A successful candidate in this role must have:
- Good hands-on skills in Oracle, PL/SQL, and T-SQL
- Strong object-oriented knowledge
- Functional knowledge of P&C (property & casualty) insurance
- A roughly 60% technical / 40% functional profile
Primary Skills:
- Good Data Mapping knowledge, RDBMS / SQL Knowledge
Secondary Skills:
- Oracle (a big plus)
Roles and Responsibilities:
- Hands-on experience with Angular, CSS, scripting, and NodeJS/Express
- Experience in responsive web development, automation, CI/CD, GitHub, microservices, and Postgres (or any other RDBMS)
- Experience with AWS (SQS, SNS, Cognito)
- Implementing observability/monitoring
- Strong experience in REST APIs
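Observability instrumentation of the kind listed above usually begins with request counters and latency percentiles. A minimal in-process sketch (the class and endpoint names are hypothetical; a real service would use a Prometheus-style client library instead):

```python
from collections import defaultdict

class Metrics:
    """Tiny in-process metrics registry: request counters plus latency
    samples per endpoint. Stands in for a Prometheus-style client."""
    def __init__(self):
        self.counters = defaultdict(int)
        self.latencies = defaultdict(list)

    def observe(self, endpoint: str, seconds: float) -> None:
        self.counters[endpoint] += 1
        self.latencies[endpoint].append(seconds)

    def p95(self, endpoint: str) -> float:
        # Nearest-rank 95th percentile over the recorded samples.
        samples = sorted(self.latencies[endpoint])
        return samples[int(0.95 * len(samples))]

metrics = Metrics()
for ms in (5, 7, 6, 120, 8):  # simulated request latencies in ms
    metrics.observe("/api/orders", ms / 1000)
```

Tail percentiles (p95/p99) rather than averages are the usual alerting signal, since a single slow outlier like the 120 ms request above vanishes in a mean.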
Similar companies
About the company
REConnect Energy is a leading digital energy platform startup focused on climate resilience solutions. Headquartered in Bangalore, India, with offices in London, Gurgaon, and Mumbai, the company has established itself as India's largest tech-enabled service provider in predictive analytics and demand-supply aggregation for the energy sector. REConnect Energy develops AI and Grid Automation software products for renewables and energy utilities, with a core focus on efficient asset and grid management, climate risk mitigation, and real-time asset visibility.
REConnect Energy offers a comprehensive range of services, including predictive analytics for electric utilities, renewable energy forecasting and grid integration, machine learning and AI for energy markets, and an OTC marketplace for clean energy. The company also specializes in environmental markets, renewable energy policies, and energy dispatch and aggregation. Positioned at the forefront of the energy transition, REConnect Energy addresses complex challenges in climate data and analytics, driving innovation in the renewable energy sector.
About the company
We are Proximity - a global team of coders, designers, product managers, geeks and experts. We solve hard, long-term engineering problems and build cutting edge tech products.
About us
Born in 2019, Proximity Works is a global, fully distributed tech firm headquartered in San Francisco - with hubs across Mumbai, Dubai, Toronto, Stockholm, and Bengaluru. We’re in the business of solving high-stakes engineering challenges with AI-powered solutions tailored for industries like sports, media & entertainment, fintech, and enterprise platforms. From real-time game analytics and ticketing workflows to creative content generation, we help build software that serves millions every day.
About the Founders
At the helm is Hardik Jagda, CEO, a technologist with startup DNA who brings clarity to complexity and a passion for building delightful experiences.
Milestones & Impact
- Trusted by some of the world’s biggest players - from major media & entertainment giants to one of the world’s largest cricket websites and the second-largest stock exchange in the world.
- Delivered game-changing tech: slashing content creation by 90%, doubling performance metrics for NASDAQ clients, and accelerating speed/performance wins for platforms like Dream11.
Culture & Why It Matters
- Fully distributed and flexible: work 100% remotely, design your own schedule, build habits that work for you and not the other way around.
- People-first culture: Community events, “Proxonaut battles,” monthly off-sites, and a liberal referral policy keep us connected even when we’re apart.
- High-trust environment: autonomy is encouraged. You’re empowered to act, learn fast, and iterate boldly. We know great work comes when talented people have space to think and create.
About the company
enParadigm is one of the world's leading experiential learning and talent intelligence companies. We leverage Generative AI & Immersive AI solutions to create hyper-personalised, immersive experiences, driving business impact and behavioural change across levels and functions.
We have been recognized among the fastest growing tech companies in APAC by Deloitte as part of the Deloitte Tech Fast 500 APAC program. We leverage our proprietary simulations and a rigorous sustained-learning approach. We have worked with 500+ organisations around the world, such as Coca-Cola, Infosys, P&G, Societe Generale, Colgate-Palmolive, WNS, and Citibank, to help drive growth and leadership.
About the company
At Hunarstreet Technologies Pvt Ltd, we specialize in delivering India’s fastest hiring solutions, tailored to meet the unique needs of businesses across various industries. Our mission is to connect companies with exceptional talent, enabling them to achieve their growth and operational goals swiftly and efficiently.
We achieve an 87% success rate in candidate relevancy to the job position and a 62% success rate in closing positions shared with us.





