11+ Supervisory management Jobs in Bangalore (Bengaluru) | Supervisory management Job openings in Bangalore (Bengaluru)
Join us in building the fastest fashion brand on earth. We live & breathe trends. Working with 3D virtual design technology, we launch new trends even before they're produced, making us the only fast-fashion brand that is sustainable at the same time, and enabling young shoppers first-hand access to the latest styles with our digital-only collection.
We waste nothing but data & exploit nothing but our imagination. We operate at the intersection of fashion & technology. #SpeakSomethingNew is our mantra.
We're looking for a production ninja who can look after order management, production, and sourcing. We don't hire many: a small, highly passionate team working to redefine fashion. Apply only if you can execute at breakneck speed, can handle stress, and are highly passionate about fashion-tech.
Job Description:
Requirements
Reporting daily to the Directors with the day's schedule execution plan.
Overseeing the production process.
Ensuring overall production is cost-effective and meets quality standards.
Drawing up a production schedule keeping the turnaround time (TAT) in mind.
Working out the human and material resources needed.
Being responsible for the selection and timely maintenance of equipment.
Looking after job-work teams (such as printing and embroidery) and deciding on vendor selection for job work.
Supervising and motivating a team of workers.
Identifying training needs, if any.
Managing stock control and inventory checks of raw materials and finished goods.
Managing the e-commerce portal, including day-to-day order processing.
Managing order and delivery-partner dashboards; generating shipping labels and tracking dispatches.
Handling customer requests for exchanges and returns.
Coordinating customer operations to track exceptions; keeping a record of customer interactions, transactions, comments, and complaints.
Desired Candidate Profile:
- Candidates must come from the garment/fashion industry.
- Sound knowledge of menswear cutting and stitching.
- Excellent leadership, administrative, interpersonal, and relationship-management skills.
- Good communication skills.
Skills and Interests:
Must be able to multitask
Have a certain amount of professionalism
Be able to manage time and people efficiently
Be willing to adapt and collaborate in a factory environment
Have garment production and planning management experience
Be able to prepare reports and plans
Able to work under pressure
Good at managing budgets
Have good communication and presentation skills
Have a positive attitude to work and be able to motivate a team
The DataFlow Group is a leading global provider of specialized Primary Source Verification (PSV) solutions, background screening, and immigration compliance services. The DataFlow Group partners with clients across the public and private sectors to help them mitigate potential risk by exposing fraudulent documents and credentials.
Key Responsibilities:
● Model Selection & Training:
○ Lead the process of selecting and training state-of-the-art machine learning models tailored to specific computer vision tasks, including object detection, image segmentation, and anomaly detection.
○ Apply advanced techniques for model fine-tuning and optimization to achieve the best performance for various applications.
○ Leverage your expertise in YOLO, ViT (Vision Transformers), and other popular architectures to develop high-performing models.
● Computer Vision & Image Processing:
○ Develop and implement computer vision algorithms for tasks such as object detection, image segmentation, and anomaly detection.
○ Use tools like OpenCV for image pre-processing, augmentation, and other transformations necessary for robust model training.
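For illustration, a minimal NumPy-only sketch of the kind of pre-processing and augmentation this bullet describes (rough OpenCV equivalents are noted in the comments). A real pipeline would call cv2 on decoded images; the tiny 4x4 "image" here is invented for the example.

```python
import numpy as np

def preprocess(img, flip=False):
    """Normalise a uint8 HxWxC image to float32 in [0, 1], optionally
    mirrored left-right as a simple augmentation."""
    out = img.astype(np.float32) / 255.0   # like dividing a cv2.imread result by 255
    if flip:
        out = out[:, ::-1, :]              # like cv2.flip(img, 1)
    return out

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :2, :] = 255                        # left half white
aug = preprocess(img, flip=True)           # white half now on the right
```

Random flips, crops, and colour jitter of this kind are typically applied per batch during training rather than baked into the dataset.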
● Model Deployment & Serving:
○ Deploy machine learning models in production environments using TensorFlow Serving or TorchServe to create highly scalable and performant inference pipelines.
○ Ensure smooth integration of models with production systems and optimise them for latency, throughput, and memory efficiency.
● MLOps & Data Pipelines:
○ Build and maintain end-to-end machine learning pipelines, from data ingestion and processing to model inference and monitoring.
○ Apply MLOps best practices for versioning, tracking experiments, automating workflows, and managing deployments.
○ Work with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes) for model serving.
● SQL & Data Management:
○ Write optimised and efficient SQL queries to extract and manipulate large datasets for model training and evaluation.
○ Analyse structured and unstructured data to generate insights and features for model development.
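As a small, self-contained illustration of the "optimised SQL" point, the sketch below uses SQLite (from the Python standard library) in place of a warehouse. The table and column names are invented for the example; the point is that an index on the filter column lets the query avoid a full scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE labels (image_id INTEGER, class TEXT, conf REAL)")
conn.executemany(
    "INSERT INTO labels VALUES (?, ?, ?)",
    [(1, "car", 0.91), (2, "person", 0.55), (3, "car", 0.87), (4, "dog", 0.40)],
)
# Index the column used in the WHERE clause so lookups are O(log n)
# instead of a full table scan.
conn.execute("CREATE INDEX idx_labels_class ON labels(class)")

rows = conn.execute(
    "SELECT image_id, conf FROM labels WHERE class = ? AND conf >= ? "
    "ORDER BY conf DESC",
    ("car", 0.5),
).fetchall()
```

The same habit carries over to warehouse SQL: filter and join on indexed or partitioned columns, and select only the columns the training job actually needs.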
● Model Evaluation & Performance Metrics:
○ Use statistical methods and metrics to evaluate model performance, ensuring high accuracy, precision, recall, F1 score, etc., on real-world tasks.
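A toy illustration of the metrics named above, computed from scratch for a binary task so the definitions are explicit. Real projects would typically reach for sklearn.metrics; the labels below are made up for the example.

```python
def binary_metrics(y_true, y_pred):
    """Return accuracy, precision, recall, and F1 for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = binary_metrics([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
```

Precision answers "of the detections we made, how many were right?"; recall answers "of the true objects, how many did we find?" — the trade-off between them is why F1 is reported alongside accuracy.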
Skills / Qualifications:
1. Qualifications:
a. Bachelor's or Master's in Computer Science, Machine Learning, Artificial Intelligence, or a related field.
b. 5+ years of hands-on experience in data science and machine learning, with a focus on computer vision.
2. Technical know-how:
a. Strong expertise in computer vision tasks, including object detection (YOLO), image segmentation, and image classification.
b. Knowledge of annotation tools like LabelMe for creating the image annotations required for training-data labelling.
c. In-depth knowledge of deep learning frameworks such as TensorFlow, PyTorch, and Keras.
d. Familiarity with Vision Transformer (ViT) models and their applications.
e. Proficiency with OpenCV for image pre-processing, augmentation, and other related tasks.
f. Experience in building and deploying machine learning models using TensorFlow Serving, TorchServe, or similar tools.
g. Strong MLOps experience, with hands-on knowledge of creating and maintaining data pipelines, automating workflows, and managing model deployments using tools like Docker, Kubernetes, and cloud platforms (AWS, GCP, Azure).
3. Data Handling:
a. Expertise in writing optimised SQL queries for large-scale data processing and analysis.
b. Ability to work with large, complex datasets and efficiently query, clean, and process them for training and inference purposes.
4. Model Evaluation:
a. Strong understanding of statistical methods and metrics to evaluate machine learning model performance.
b. Experience with A/B testing, model validation techniques, and performance benchmarking.
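A minimal sketch of the statistics behind the A/B testing mentioned above: a two-proportion z-test comparing the conversion rates of a control and a variant. The sample sizes and counts are invented for illustration.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z statistic for H0: the two conversion rates are equal."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control converts 200/1000, variant 260/1000.
z = two_proportion_z(200, 1000, 260, 1000)
# |z| > 1.96 rejects H0 at the 5% level (two-sided).
```

The same machinery applies when benchmarking two model versions on live traffic: treat each version's success rate as a proportion and test whether the observed lift is larger than sampling noise.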
Good to have:
● Experience with large-scale distributed machine learning systems.
● Knowledge of advanced techniques such as self-supervised learning, transfer learning, or reinforcement learning.
● Familiarity with data versioning and experiment tracking tools like MLflow or DVC.
● Experience in integrating machine learning models into production environments for real-time inference.
Experience in automation testing
Experience with BDD frameworks
Experience with the Selenium tool
Experience in Java programming
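For readers unfamiliar with the BDD style listed above, here is a minimal Given/When/Then sketch in plain Python for portability. Real suites would express the steps in Gherkin and drive a browser through Selenium WebDriver; the "login" logic here is an invented stand-in.

```python
def login(users, username, password):
    """Stand-in for the system under test: check credentials."""
    return users.get(username) == password

def test_registered_user_can_log_in():
    # Given: a registered user exists
    users = {"asha": "s3cret"}
    # When: she submits the correct credentials
    ok = login(users, "asha", "s3cret")
    # Then: access is granted
    assert ok

test_registered_user_can_log_in()
```

The value of BDD is that the Given/When/Then structure reads as a behaviour specification, so non-developers can review what the automated test actually asserts.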
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face, and techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
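The vector databases mentioned in the skills above all reduce to one core operation; here is a hedged, library-free sketch of that retrieval step — embed documents, then rank by cosine similarity to a query embedding. The 3-d "embeddings" are invented for illustration; real systems use model embeddings and an approximate nearest-neighbour index.

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Indices of the k rows of doc_vecs most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    return np.argsort(scores)[::-1][:k]  # best first

docs = np.array([[0.9, 0.1, 0.0],
                 [0.1, 0.9, 0.0],
                 [0.8, 0.2, 0.1]])
best = top_k(np.array([1.0, 0.0, 0.0]), docs)
```

In a retrieval-augmented generation pipeline, the top-k documents found this way are pasted into the LLM prompt as context before the user's question.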
People First Consultant
Guidewire Developer - PAN India
Role: Guidewire Developer
Location: PAN India
Years of experience: 3 - 15 years
Roles and Responsibilities:
- At least 3 years of experience as a Guidewire developer
- Experience in PolicyCenter (PC) or ClaimCenter (CC)
- Experience in configuration or integration
- Guidewire Specialist certification
Regards
Sundaravalli
XpressBees – a logistics company started in 2015 – is among the fastest-growing companies in its sector. Our vision to evolve into a strong full-service logistics organization reflects itself in our various lines of business: B2C logistics, 3PL, B2B Xpress, hyperlocal, and cross-border logistics.
Our strong domain expertise and constant focus on innovation have helped us rapidly evolve into India's most trusted logistics partner. XB has progressively carved its way toward best-in-class technology platforms, an extensive logistics network, and a seamless last-mile management system.
While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the near future include strengthening our presence as the service provider of choice and leveraging the power of technology to drive supply-chain efficiencies.
Job Overview
XpressBees is enriching and scaling its end-to-end logistics solutions at a high pace. This is a great opportunity to join the team forming and delivering the operational strategy behind Artificial Intelligence / Machine Learning and Data Engineering, leading projects and teams of AI engineers collaborating with data scientists. In this role, you will build high-performance AI/ML solutions using groundbreaking AI/ML and big-data technologies. You will need to understand business requirements and convert them into solvable data science problem statements, and you will be involved in end-to-end AI/ML projects, from small-scale POCs all the way to full-scale ML pipelines in production.
Seasoned AI/ML engineers will own the implementation and productionization of cutting-edge AI-driven algorithmic components for search, recommendation, and insights, improving the efficiency of the logistics supply chain and serving the customer better.
You will apply innovative ML tools and concepts to deliver value to our teams and customers and make an impact on the organization while solving challenging problems in the areas of AI, ML, data analytics, and computer science.
Opportunities for application:
- Route Optimization
- Address / Geo-Coding Engine
- Anomaly detection, Computer Vision (e.g. loading / unloading)
- Fraud Detection (fake delivery attempts)
- Promise Recommendation Engine etc.
- Customer & tech support solutions, e.g. chatbots.
- Breach detection / prediction
An Artificial Intelligence Engineer will apply themselves in the areas of:
- Deep Learning, NLP, Reinforcement Learning
- Machine Learning – logistic regression, decision trees, random forests, XGBoost, etc.
- Driving optimization via LPs, MILPs, stochastic programs, and MDPs
- Operations Research, Supply Chain Optimization, and Data Analytics/Visualization
- Computer Vision and OCR technologies
The AI Engineering team enables internal teams to add AI capabilities to their apps and workflows easily via APIs, without each team needing to build AI expertise – decision support, NLP, and computer vision for public clouds, and enterprise offerings in NLU, vision, and conversational AI. The candidate is adept at working with large data sets to find opportunities for product and process optimization, and at using models to test the effectiveness of different courses of action. They must have experience with a variety of data mining and data analysis methods and data tools, with building and implementing models, with using and creating algorithms, and with creating and running simulations. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.
Roles & Responsibilities
● Develop scalable infrastructure, including microservices and backends, that automates the training and deployment of ML models.
● Build cloud services in decision support (anomaly detection, time-series forecasting, fraud detection, risk prevention, predictive analytics), computer vision, natural language processing (NLP), and speech that work out of the box.
● Brainstorm and design various POCs using ML/DL/NLP solutions for new or existing enterprise problems.
● Work with fellow data scientists and software engineers to build out other parts of the infrastructure, effectively communicating your needs and understanding theirs, and address external and internal stakeholders' product challenges.
● Build the core of Artificial Intelligence and AI services such as decision support, vision, speech, text, NLP, NLU, and others.
● Leverage cloud technology – AWS, GCP, Azure.
● Experiment with ML models in Python using machine learning libraries (PyTorch, TensorFlow) and big-data tools such as Hadoop, HBase, Spark, etc.
● Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
● Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies.
● Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
● Develop custom data models and algorithms to apply to data sets.
● Use predictive modeling to increase and optimize customer experiences, supply-chain metrics, and other business outcomes.
● Develop the company's A/B testing framework and test model quality.
● Coordinate with different functional teams to implement models and monitor outcomes.
● Develop processes and tools to monitor and analyze model performance and data accuracy.
● Deliver machine learning and data science projects using data science techniques and associated libraries such as AI/ML or equivalent NLP (Natural Language Processing) packages. Such techniques require a good to phenomenal understanding of statistical models, probabilistic algorithms, classification, clustering, deep learning, or related approaches as they apply to financial applications.
● The role will encourage you to learn a wide array of capabilities, toolsets, and architectural patterns for successful delivery.
What is required of you?
You will get an opportunity to build and operate a suite of massive-scale, integrated data/ML platforms in a broadly distributed, multi-tenant cloud environment.
● B.S., M.S., or Ph.D. in Computer Science or Computer Engineering.
● Coding knowledge and experience with several languages: C, C++, Java, JavaScript, etc.
● Experience building high-performance, resilient, scalable, and well-engineered systems.
● Experience in CI/CD and development best practices, instrumentation, and logging systems.
● Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
● Experience working with and creating data architectures.
● Good understanding of various machine learning and natural language processing technologies, such as classification, information retrieval, clustering, knowledge graphs, semi-supervised learning, and ranking.
● Knowledge and experience in statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, social network analysis, etc.
● Knowledge of web services: Redshift, S3, Spark, DigitalOcean, etc.
● Knowledge of creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
● Knowledge of analyzing data from third-party providers: Google Analytics, Site Catalyst, Coremetrics, AdWords, Crimson Hexagon, Facebook Insights, etc.
● Knowledge of distributed data/computing tools: MapReduce, Hadoop, Hive, Spark, MySQL, Kafka, etc.
● Knowledge of visualizing/presenting data for stakeholders using QuickSight, Periscope, Business Objects, D3, ggplot, Tableau, etc.
● Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
● Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
● Experience building data pipelines that prepare data for machine learning and complete feedback loops.
● Knowledge of the machine learning lifecycle and experience working with data scientists.
● Experience with relational and NoSQL databases.
● Experience with workflow scheduling/orchestration tools such as Airflow or Oozie.
● Working knowledge of current techniques and approaches in machine learning and statistical or mathematical models.
● Strong data engineering and ETL skills to build scalable data pipelines, with exposure to a data-streaming stack (e.g. Kafka).
● Relevant experience in fine-tuning and optimizing ML (especially deep learning) models to bring down serving latency.
● Exposure to an ML model productionization stack (e.g. MLflow, Docker).
● Excellent exploratory data analysis skills to slice and dice data at scale using SQL in Redshift/BigQuery.
● The candidate will be part of the Engineering leadership team and will help shape strategy and execute the product roadmap.
● Work closely with Product and business teams to strategize and design features and product experiments.
● Lead a team of 15-30 engineers, develop the engineers on the team, and help them advance in their careers.
● Develop project scopes and objectives, involving all relevant stakeholders and ensuring technical feasibility.
● Ensure resource availability and allocation.
● Develop a detailed project plan to track progress, lead meetings, and set expectations for the project team.
● Perform risk management to minimize project risks.
● Conduct regular 1-1s with the team.
● Scale the technology architecture, team, and product to drive multifold growth over the next 2-3 years.
● Preferably from the LAMP/MEAN stack, with good exposure to scalable and distributed systems using microservices.
● Exposure to various cloud hosting environments (preferably AWS).
Requirements:
● Bachelor's degree required; master's preferred.
● You have managed engineering teams with a strong record of developing and delivering products.
● Proven working experience as a senior engineering manager in the information technology sector.
● You put a strong emphasis on recruiting and developing your team.
● You have an eye for great products and can work effectively with engineers, product managers, and designers to build them.
● You are deeply technical but prefer to lean on your leadership skills.
● You are a strong communicator who can streamline the flow of information between Engineering and other teams.
● Solid organizational skills, including attention to detail and multitasking.
● You have a curiosity about how things work.
● PMP / PRINCE II certification is a plus (theoretical and practical project management knowledge).
● Excellent decision-making and leadership capabilities.
● A minimum of 8 years of IT experience leading multi-skilled teams across product, mobile, and web application development and engineering.
Work shift: Day time
- Strong problem-solving skills with an emphasis on product development.
• Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
• Experience in building ML pipelines with Apache Spark and Python
• Proficiency in implementing the end-to-end data science lifecycle
• Experience in model fine-tuning and advanced grid search techniques
• Experience working with and creating data architectures.
• Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
• Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
• Excellent written and verbal communication skills for coordinating across teams.
• A drive to learn and master new technologies and techniques.
• Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
• Develop custom data models and algorithms to apply to data sets.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
• Develop the company's A/B testing framework and test model quality.
• Coordinate with different functional teams to implement models and monitor outcomes.
• Develop processes and tools to monitor and analyze model performance and data accuracy.
Key skills:
● Strong knowledge of data science pipelines with Python
● Object-oriented programming
● A/B testing frameworks and model fine-tuning
● Proficiency with the scikit-learn, NumPy, and pandas packages in Python
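The grid search mentioned in the skills above can be sketched in a few lines; this toy version uses a made-up quadratic loss in place of a real cross-validation score, and real pipelines would typically use scikit-learn's GridSearchCV over estimator parameters instead.

```python
from itertools import product

def toy_val_loss(lr, reg):
    """Stand-in for a cross-validated loss; minimised at lr=0.1, reg=0.01."""
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

# Try every combination of hyperparameter values and keep the best.
grid = {"lr": [0.01, 0.1, 1.0], "reg": [0.001, 0.01, 0.1]}
best_params, best_loss = None, float("inf")
for lr, reg in product(grid["lr"], grid["reg"]):
    loss = toy_val_loss(lr, reg)
    if loss < best_loss:
        best_params, best_loss = {"lr": lr, "reg": reg}, loss
```

Exhaustive search grows exponentially with the number of parameters, which is why "advanced" variants (random search, successive halving, Bayesian optimisation) are usually preferred beyond two or three dimensions.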
Nice to have:
● Ability to work with containerized solutions: Docker/Compose/Swarm/Kubernetes
● Unit testing, Test-driven development practice
● DevOps, Continuous integration/ continuous deployment experience
● Agile development environment experience, familiarity with SCRUM
● Deep learning knowledge