

Tecblic Private Limited
https://www.tecblic.com
Jobs at Tecblic Private Limited

Job Description: Machine Learning Engineer – LLM, Agentic AI, Computer Vision, and MLOps
Location: Ahmedabad
Experience: 4 to 6 years
Employment Type: Full-Time
About Us
Join a forward-thinking team at Tecblic, where innovation meets cutting-edge technology. We specialize in delivering AI-driven solutions that empower businesses to thrive in the digital age. If you're passionate about LLMs, Computer Vision, MLOps, and pushing the boundaries of Agentic AI, we’d love to have you on board.
Key Responsibilities
- Research and Development: Design, develop, and fine-tune machine learning models across LLM, computer vision, and Agentic AI use cases.
- Model Optimization: Fine-tune and optimize pre-trained models for performance, scalability, and low latency.
- Computer Vision: Build and deploy vision models for object detection, classification, OCR, and segmentation.
- Integration: Work closely with software and product teams to integrate models into production-ready applications.
- Data Engineering: Develop robust data pipelines for structured, unstructured (text/image/video), and streaming data.
- Production Deployment: Deploy, monitor, and manage ML models in production using DevOps and MLOps practices.
- Experimentation: Prototype and test new AI approaches such as reinforcement learning, few-shot learning, and generative AI.
- DevOps Collaboration: Collaborate with the DevOps team to ensure CI/CD pipelines, infrastructure-as-code, and scalable deployments are in place.
- Technical Mentorship: Support and mentor junior ML and data engineers.
Requirements
Core Technical Skills
- Strong Python skills for machine learning and computer vision.
- Hands-on experience with PyTorch, TensorFlow, Hugging Face, Scikit-learn, OpenCV.
- Deep understanding of LLMs (e.g., GPT, BERT, T5) and Computer Vision architectures (e.g., CNNs, Vision Transformers, YOLO, R-CNN).
- Strong knowledge of NLP tasks, image/video processing, and real-time inference.
- Experience in cloud platforms: AWS, GCP, or Azure.
- Familiarity with Docker, Kubernetes, and serverless deployments.
- Proficiency in SQL, Pandas, NumPy, and data wrangling techniques.
DevOps & MLOps Skills
- Experience with CI/CD tools such as GitHub Actions, GitLab CI, Jenkins, etc.
- Knowledge of Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Pulumi.
- Familiarity with container orchestration and Kubernetes-based ML model deployment.
- Hands-on experience with ML pipelines and monitoring tools: MLflow, Kubeflow, TFX, or Seldon.
- Understanding of model versioning, model registry, and automated testing/validation in ML workflows.
- Exposure to observability and logging frameworks (e.g., Prometheus, Grafana, ELK stack).
Additional Skills (Good to Have)
- Knowledge of Agentic AI systems and use cases.
- Experience with generative models (e.g., GANs, VAEs) and RL-based architectures.
- Prompt engineering and fine-tuning for LLMs in specialized domains.
- Working with vector databases (e.g., Pinecone, FAISS, Weaviate).
- Distributed data processing using Apache Spark or Dask.
General Skills
- Strong foundation in mathematics, including linear algebra, probability, and statistics.
- Deep understanding of data structures and algorithms.
- Comfortable handling large-scale datasets, including images, video, and multi-modal data.
Soft Skills
- Strong analytical and problem-solving mindset.
- Excellent communication skills for cross-functional collaboration.
- Self-motivated, adaptive, and committed to continuous learning.

Job Description: Data Engineer
Location: Ahmedabad
Experience: 7+ years
Employment Type: Full-Time
We are looking for a highly motivated and experienced Data Engineer to join our team. As a Data Engineer, you will play a critical role in designing, building, and optimizing data pipelines that ensure the availability, reliability, and performance of our data infrastructure. You will collaborate closely with data scientists, analysts, and cross-functional teams to provide timely and efficient data solutions.
Responsibilities
● Design and optimize data pipelines for various data sources
● Design and implement efficient data storage and retrieval mechanisms
● Develop data modelling solutions and data validation mechanisms
● Troubleshoot data-related issues and recommend process improvements
● Collaborate with data scientists and stakeholders to provide data-driven insights and solutions
● Coach and mentor junior data engineers in the team
Skills Required:
● Minimum 5 years of experience in data engineering or a related field
● Proficient in designing and optimizing data pipelines and data modeling
● Strong programming expertise in Python
● Hands-on experience with big data technologies such as Hadoop, Spark, and Hive
● Extensive experience with cloud data services such as AWS, Azure, and GCP
● Advanced knowledge of database technologies like SQL, NoSQL, and data warehousing
● Knowledge of distributed computing and storage systems
● Familiarity with DevOps practices, Power Automate, and Microsoft Fabric will be an added advantage
● Strong analytical and problem-solving skills with outstanding communication and collaboration abilities
Qualifications
● Bachelor's degree in Computer Science, Data Science, or a computer-related field

Data Analytics Lead
Responsibilities:
· Oversee the design, development, and implementation of data analysis solutions to meet business needs.
· Work closely with business stakeholders and the Aviation SME to define data requirements, project scope, and deliverables.
· Drive the design and development of analytics data models and data warehouse designs.
· Develop and maintain data quality standards and procedures.
· Manage and prioritize data analysis projects, ensuring timely completion.
· Identify opportunities to improve data analysis processes and tools.
· Collaborate with Data Engineers and Data Architects to ensure data solutions align with the overall data platform architecture.
· Evaluate and recommend new data analysis tools and technologies.
· Contribute to the development of best practices for data analysis.
· Participate in project meetings and provide input on data-related issues, risks and requirements.
Qualifications
· 8+ years of experience as a Data Analytics Lead, with experience leading or mentoring a team.
· Extensive experience with cloud-based data modelling and data warehousing solutions using Azure Databricks.
· Proven experience in data technologies and platforms, ETL processes and tools, preferably Azure Data Factory, Azure Databricks (Spark), and Delta Lake.
· Advanced proficiency in data visualization tools such as Power BI.
Data Analysis and Visualization:
- Experience in data analysis, statistical modelling, and machine learning techniques.
- Proficiency in analytical tools like Python, R, and libraries such as Pandas, NumPy for data analysis and modelling.
- Strong expertise in Power BI, Superset, and Tableau for data visualization, data modelling, and DAX queries, with knowledge of best practices.
- Experience in implementing Row-Level Security in Power BI.
- Ability to work with moderately complex data models and quickly understand application data design and processes.
- Familiar with industry best practices for Power BI and experienced in performance optimization of existing implementations.
- Understanding of machine learning algorithms, including supervised, unsupervised, and deep learning techniques.
Data Handling and Processing:
- Proficient in SQL Server and query optimization.
- Expertise in application data design and process management.
- Extensive knowledge of data modelling.
- Hands-on experience with Azure Data Factory and Azure Databricks.
- Expertise in data warehouse development, including experience with SSIS (SQL Server Integration Services) and SSAS (SQL Server Analysis Services).
- Proficiency in ETL processes (data extraction, transformation, and loading), including data cleaning and normalization.
- Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) for large-scale data processing.
- Understanding of data governance, compliance, and security measures within Azure environments.

Data Architecture and Engineering Lead
Responsibilities:
- Lead Data Architecture: Own the design, evolution, and delivery of enterprise data architecture across cloud and hybrid environments. Develop relational and analytical data models (conceptual, logical, and physical) to support business needs and ensure data integrity.
- Consolidate Core Systems: Unify data sources across airport systems into a single analytical platform optimised for business value.
- Build Scalable Infrastructure: Architect cloud-native solutions that support both batch and streaming data workflows using tools like Databricks, Kafka, etc.
- Implement Governance Frameworks: Define and enforce enterprise-wide data standards for access control, privacy, quality, security, and lineage.
- Enable Metadata & Cataloguing: Deploy metadata management and cataloguing tools to enhance data discoverability and self-service analytics.
- Operationalise AI/ML Pipelines: Lead data architecture that supports AI/ML initiatives, including forecasting, pricing models, and personalisation.
- Partner Across Functions: Translate business needs into data architecture solutions by collaborating with leaders in Operations, Finance, HR, Legal, and Technology.
- Optimize Cloud Cost & Performance: Roll out compute and storage systems that balance cost efficiency, performance, and observability across platforms.
Qualifications:
- 12+ years of experience in data architecture, with 3+ years in a senior or leadership role across cloud or hybrid environments
- Proven ability to design and scale large data platforms supporting analytics, real-time reporting, and AI/ML use cases
- Hands-on expertise with ingestion, transformation, and orchestration pipelines
- Extensive experience with Microsoft Azure data services, including Azure Data Lake Storage, Azure Databricks, Azure Data Factory and related technologies.
- Strong knowledge of ERP data models, especially SAP and MS Dynamics
- Experience with data governance, compliance (GDPR/CCPA), metadata cataloguing, and security practices
- Familiarity with distributed systems and streaming frameworks like Spark or Flink
- Strong stakeholder management and communication skills, with the ability to influence both technical and business teams
Tools & Technologies
- Warehousing: Azure Databricks Delta, BigQuery
- Big Data: Apache Spark
- Cloud Platforms: Azure (ADLS, AKS, EventHub, ServiceBus)
- Streaming: Kafka, Pub/Sub
- RDBMS: PostgreSQL, MS SQL
- NoSQL: Redis
- Monitoring: Azure Monitor, Application Insights, Prometheus, Grafana


Job Title: MERN Stack Developer (5+ Years Experience)
Position: MERN Stack Developer
Location: Ahmedabad (On-Site)
Onshore Opportunity
Experience: 5 to 7 years
Joining: Immediate Joiners Preferred
Employment Type: Full-Time
About the Role
We are seeking a talented and experienced MERN Stack Developer to join our on-site team. As a key member of our development team, you will be responsible for building and maintaining high-performance web applications using MongoDB, Express.js, React.js, and Node.js.
Key Responsibilities
- Develop and maintain scalable, responsive web applications using the MERN stack
- Write clean, efficient, and well-documented code
- Integrate APIs and third-party services
- Work closely with UI/UX designers, QA, and product teams
- Optimize applications for maximum speed and scalability
- Perform code reviews and provide constructive feedback
- Debug and resolve technical issues and bugs
- Ensure cross-platform and cross-browser compatibility
Required Skills & Qualifications
- 5+ years of professional experience with the MERN stack
- Strong proficiency in JavaScript, ES6+, HTML5, and CSS3
- In-depth experience with React.js (including hooks, Redux, component lifecycle)
- Strong backend development skills using Node.js and Express.js
- Proficient in working with MongoDB, including aggregation, indexing, and schema design
- Experience with RESTful APIs and JSON
- Familiarity with Git and version control workflows
- Strong problem-solving and debugging skills
- Excellent communication and teamwork abilities
Must-Have Experience
- 5+ years working professionally with the MERN stack
- Experience in a team lead role
- Expertise in JavaScript (ES6+), HTML5, CSS3
- Deep understanding of React.js (Hooks, Redux, lifecycle)
- Backend development with Node.js & Express.js
- Strong hands-on with MongoDB (schemas, indexing, aggregation)
- API integration, Git, version control, and debugging
Preferred Qualifications
- Experience with deployment on AWS, Heroku, or similar cloud platforms
- Familiarity with containerization tools like Docker
- Experience with testing frameworks such as Jest or Mocha
- Knowledge of agile methodologies


🚀 We Are Hiring: Data Engineer | 4+ Years Experience 🚀
Job description
🔍 Job Title: Data Engineer
📍 Location: Ahmedabad
🚀 Work Mode: On-Site Opportunity
📅 Experience: 4+ Years
🕒 Employment Type: Full-Time
⏱️ Availability: Immediate Joiner Preferred
Join Our Team as a Data Engineer
We are seeking a passionate and experienced Data Engineer to be a part of our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure.
As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.
Your Key Responsibilities
Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.
Design effective data storage, retrieval mechanisms, and data models to support analytics and business needs.
Implement data validation, transformation, and quality monitoring processes.
Collaborate with cross-functional teams to deliver impactful, data-driven solutions.
Proactively identify bottlenecks and optimize existing workflows and processes.
Provide guidance and mentorship to junior engineers in the team.
Skills & Expertise We’re Looking For
3+ years of hands-on experience in Data Engineering or related roles.
Strong expertise in Python and data pipeline design.
Experience working with Big Data tools like Hadoop, Spark, Hive.
Proficiency with SQL, NoSQL databases, and data warehousing solutions.
Solid experience with cloud platforms, particularly Azure.
Familiar with distributed computing, data modeling, and performance tuning.
Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.
Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.
Qualifications
Bachelor’s degree in Computer Science, Data Science, or a related field.
