



Do you want to help build real technology for a meaningful purpose? Do you want to contribute to making the world more sustainable and more advanced while achieving extraordinary precision in analytics?
What is your role?
As a Computer Vision & Machine Learning Engineer at Datasee.AI, you'll be core to the development of our robotic harvesting system's visual intelligence. You'll bring deep computer vision, machine learning, and software expertise while thriving in a fast-paced, flexible, and energized startup environment. As an early team member, you'll directly shape our success, growth, and culture, and you'll hold a significant role that grows as Datasee.AI grows.
What you’ll do
- Work with the core R&D team that drives computer vision and image processing development.
- Build deep learning models for our data and for object detection on large-scale images (a minimal sketch of this kind of pipeline follows this list).
- Design and implement real-time algorithms for object detection, classification, tracking, and segmentation.
- Coordinate and communicate across the computer vision, software, and hardware teams to design and execute commercial engineering solutions.
- Automate workflows across our fast-paced data delivery systems.
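Purely as an illustration of the object detection bullet above, and not a description of Datasee.AI's actual pipeline: a minimal Python sketch that tiles a large image and runs a pretrained torchvision detector over each tile. The model choice, tile size, and score threshold are assumptions.

```python
# Illustrative only: tile a large image and run a pretrained detector on each tile.
# Assumes torchvision >= 0.13 (for weights="DEFAULT"); model and thresholds are placeholders.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_tiles(image_path: str, tile: int = 1024, threshold: float = 0.5):
    """Return detections in full-image coordinates as (x1, y1, x2, y2, label, score)."""
    img = Image.open(image_path).convert("RGB")
    width, height = img.size
    detections = []
    for x0 in range(0, width, tile):
        for y0 in range(0, height, tile):
            crop = img.crop((x0, y0, min(x0 + tile, width), min(y0 + tile, height)))
            with torch.no_grad():
                out = model([to_tensor(crop)])[0]
            for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
                if score >= threshold:
                    x1, y1, x2, y2 = box.tolist()
                    # Shift tile-local coordinates back into the full image frame.
                    detections.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0,
                                       label.item(), score.item()))
    return detections
```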
What we are looking for
- 1 to 3+ years of professional experience in computer vision and machine learning.
- Extensive experience with Python.
- Experience with Python libraries such as OpenCV, TensorFlow, and NumPy.
- Familiarity with deep learning libraries such as Keras or PyTorch.
- Hands-on work with CNN architectures such as FCN, R-CNN, Fast R-CNN, and YOLO.
- Experience with hyperparameter tuning, data augmentation, data wrangling, model optimization, and model deployment.
- B.E./M.E./M.Sc. in Computer Science/Engineering or a relevant degree.
- Experience with Docker, AWS services, and production-level modelling.
- Basic knowledge of GIS fundamentals would be an added advantage.
Preferred Requirements
- Experience with Qt, desktop application development, and desktop automation.
- Knowledge of satellite image processing, geographic information systems (GIS), GDAL, QGIS, and ArcGIS.
About Datasee.AI:
Datasee.AI, Inc. is an AI-driven image analytics company offering asset management solutions for the renewable energy, infrastructure, utilities, and agriculture sectors. With core expertise in image processing, computer vision, and machine learning, Datasee.AI's solutions provide value across the enterprise for all stakeholders through a data-driven approach.
With sales and operations based out of the US, Europe, and India, Datasee.AI is a team of 32 people located across different geographies, with varied domain expertise and interests.
A focused and happy bunch of people who take tasks head-on and build scalable platforms and products.

About DATASEE.AI, INC.
Datasee.AI was started to serve one purpose – make the power of big data and artificial intelligence easily accessible for every business at every stage of their operational cycle. With our image analytics platform, businesses can digitize their assets through a single pane for all teams, avoid operational and organizational silos, mitigate risks, and increase profitability. While predominantly used by players in the renewable energy and farming sectors, our team is working to expand our capabilities across multiple industries. At Datasee.AI, we use cutting-edge technology to minimize human errors.
KEY POINTERS
1. The average age of our employees is 24!
2. One of the very few multi-disciplinary engineering companies in India with global operations (six countries & counting)
3. Multi-Disciplinary Engineering is our forte - engineers from Mechanical, Environmental, Geo-Informatics, Computer Science, Energy Engineering, Aerospace Engineering, Product Design, Electrical Engineering, and even Irrigation Management
4. Focused on building products for clean energy management & monitoring
Join us to learn and grow together!
Similar jobs
🚀 We’re Hiring: Performance Marketing Manager
📍Experience: 3–7 Years
📢 Location: Andheri, Marol
🔗 Join a fast-growing team that thrives on creativity, innovation, and results.
Are you a data-driven marketer with proven expertise in Google Ads, Meta Ads & Facebook Ads? If you’re fluent in communication, performance-focused, and seeking a meaningful growth opportunity, this could be the role for you.
✅ Key Responsibilities:
• Plan, execute & optimize paid campaigns across YouTube, Amazon, Programmatic, Affiliate Networks, etc.
• Manage complete campaign lifecycle: planning, A/B testing, execution & optimization
• Conduct keyword research and refine targeting for improved ROI
• Collaborate with creative, tech & content teams to align campaigns with business goals
• Monitor performance metrics like CAC, LTV, ROAS, and implement data-backed strategies for continuous growth
🛠 Technical Skills Required:
• 3–7 years of hands-on experience in performance marketing
• Solid understanding of CPC, CPA, CPL, ROI
• Familiarity with tools like DV360, HubSpot/Agile CRM, affiliate marketing platforms
• Experience in lead generation and purchase-driven campaigns
• WordPress/Shopify knowledge is a bonus
🌟 What We’re Looking For:
• Proactive, accountable, and results-driven mindset
• Excellent verbal & written communication
• Ability to manage multiple projects in a fast-paced environment
• Previous experience with app installs, conversion campaigns, and Brand Lift Studies is a plus
• A collaborative attitude with the ability to work independently

Company Description
KGISL Institute of Technology is a higher education institution located in Coimbatore, Tamil Nadu, India. As part of the KGISL Campus, the institute is committed to delivering quality education in the field of technology. With a focus on both theoretical and practical learning, KGISL Institute of Technology aims to produce skilled professionals who can meet the demands of the industry.
Role Description
This is a full-time on-site role for an Assistant Professor in the Computer Science and Engineering (CSE) department. Located in Coimbatore, the Assistant Professor will be responsible for teaching undergraduate and graduate courses, guiding student projects, and conducting research. Day-to-day tasks will include preparing lecture materials, evaluating student performance, mentoring students, and participating in faculty meetings. The role also involves contributing to curriculum development and engaging in departmental activities.
Qualifications
Strong knowledge in Computer Science and Engineering subjects
Experience in teaching at the undergraduate and graduate levels
Research skills and the ability to guide student projects
Excellent written and verbal communication skills
Proficiency in curriculum development and academic assessment
Master's degree (e.g., M.Tech) in Computer Science, Engineering, or a related field
Experience in the use of digital tools and teaching technologies is an advantage
Commitment to continuous improvement and professional development

Staff DevOps Engineer with Azure
EGNYTE YOUR CAREER. SPARK YOUR PASSION.
Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning and every Egnyter should be respected. With 22,000+ customers worldwide and growing, you can make an impact by protecting their valuable data. When joining Egnyte, you're not just landing a new career; you become part of a team of Egnyters who are doers, thinkers, and collaborators who embrace and live by our values:
Invested Relationships
Fiscal Prudence
Candid Conversations
ABOUT EGNYTE
Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.
Our Production Engineering team enables Egnyte to provide customers access to their data 24/7 by providing best-in-class infrastructure.
ABOUT THE ROLE
We store billions of files and multiple petabytes of data, and we observe more than 11K API requests per second on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work from start to finish are integral. Our engineers are part of the process from design to code, to test, to deployment, and back again for further iterations. You can, and will, touch every level of the infrastructure depending on the day and the project you are working on. The ideal candidate can take a complex problem and execute it end to end, while mentoring and setting higher standards for the rest of the team and for new hires.
WHAT YOU’LL DO:
• Design, build, and maintain self-hosted and cloud environments to serve our own applications and services.
• Collaborate with software developers to build stable, scalable, and high-performance solutions.
• Take part in big projects such as migrating solutions from self-hosted environments to the cloud, from virtual machines to Kubernetes, and from monolith to microservices.
• Proactively make our organization and technology better!
• Advise others on how DevOps can make a positive impact on their work.
• Share knowledge and mentor more junior team members while still learning and gaining new skills yourself.
• Maintain consistently high standards of communication, productivity, and teamwork across all teams.
YOUR QUALIFICATIONS:
• 5+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes.
• Expert knowledge of Microsoft Azure.
• Programming prowess (Python, Golang).
• Knowledge of and experience with deploying and maintaining Java and Python apps using application and web servers (Tomcat, Nginx, etc.).
• Ability to solve complex problems with simple, elegant and clean code.
• Practical knowledge of CI/CD solutions, GitLab CI or similar.
• Practical knowledge of Docker as a tool for testing and building an environment (an illustrative sketch follows this list).
• Knowledge of Kubernetes and related technologies.
• Experience with metric-based monitoring solutions.
• Solid English skills to effectively communicate with other team members.
• Good understanding of the Linux Operating System on the administration level.
• Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude).
• Strong sense of ownership and ability to drive big projects.
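As an illustration of the "Docker as a tool for testing" bullet above, and not an Egnyte-specific workflow, here is a minimal Python sketch using the Docker SDK to run a one-off command in a disposable container; the image name and command are placeholder assumptions.

```python
# Hypothetical example: run a one-off test command in a disposable container
# using the Docker SDK for Python. Image and command are placeholders.
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Run the command in a fresh container and capture its output;
# remove=True deletes the container afterwards, keeping the environment clean.
output = client.containers.run(
    image="python:3.11-slim",
    command=["python", "-c", "print('tests would run here')"],
    remove=True,
)
print(output.decode().strip())
```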
BONUS SKILLS:
• Work experience as a Microsoft Azure architect.
• Experience with cloud migration projects.
• Leadership skills and experience.
COMMITMENT TO DIVERSITY, EQUITY, AND INCLUSION:
At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.


Job description
Opportunity
We are looking for a young, enthusiastic engineer who loves coding and is passionate and ready to tackle some of the most difficult problems in the laundry industry. We believe that user-centric design ultimately leads to the best products, so we listen closely to our users, both external and internal. As an engineer on our close-knit, cross-functional team, you'll be an active voice in shaping our product.
You will play a role in product planning, drive the implementation and release of major features, and be a champion of best practices for writing well-tested, well-organized code.
Tech Stack
Our tech stack is ASP.NET (MVC, WebForms, Web API), MS SQL Server, jQuery, JavaScript, Angular 2, and a microservices architecture. Our platform is hosted on Google Cloud.
Responsibilities
- Work with internal business teams and management to define requirements and develop technical specifications.
- You will be creating modules and components and coupling them together into a functional app.
- Creating self-contained, reusable, and testable modules and components
- Developing and integrating RESTful API services.
- Convert legacy code into single-page web applications on the new tech stack.
- You will work with payment gateways, barcodes, QR codes, biometrics, hardware integrations (printers, scanners, cash registers, POS terminals, laundry and dry-cleaning machines, conveyor belts, etc.), and map routing and distance algorithms.
What you can expect in the next 12 months
Within 1 month
- You should have acquired a good knowledge of the domain, product and process that we follow.
- Setup the dev environment and push your first small piece of code to production.
- You will start attending daily stand-ups.
- Dive into technology by pair-programming with your teammates and attending engineering training sessions designed and presented by your peers.
- Have a one-on-one chat with every member of the Quick Dry Cleaning team so you get to know everyone well and understand each other.
Within 3 months
- You'll start developing your first module all by yourself. (With some guidance)
- Write your first set of unit test cases and work with the quality engineer to set up functional testing workflows.
- Conduct your first review of a peer's code.
- Participate in several release planning sessions to get a deep understanding of the new features we're working on.
- Participate in your first production release cycle.
- Start providing support for bugs, issues, small improvements on production.
- Take ownership of one tool (JIRA, build deployment, CI/CD, Git, or another).
Within 6 months
- You'll launch your first two or three modules to production.
- Take architectural and infrastructure decisions that will impact the entire product.
- Be comfortable navigating most of our stack and infrastructure.
- Be responsible for the planning, scoping, design, and implementation of new services.
Within 12 months
- You'll launch at least 3 to 4 core modules to production and completely own scaling for some more.
- Participate in interviewing and hiring, as a way to influence team growth.
- Mentorship: do code reviews for juniors, spend time and show a willingness to teach them, and share expertise with new team members.
- Collaborate with engineering, product, sales and customer success leadership to define priorities and set delivery goals.
- You will start owning an important workflow/ section of the product.
Required Candidate profile
What an ideal candidate looks like
- Strong knowledge of Computer Science fundamentals in object-oriented design, data structures, algorithm design, problem-solving, complexity analysis, databases, networking, and distributed systems.
- 3-4 years of experience in product development (ASP.NET, C#.NET, WCF, MVC, SQL Server applications, version control, CI/CD pipelines).
- A strong online presence on forums such as Stack Overflow and GitHub.
- Excellent verbal and written skills. The ability to explain sophisticated architectures to engineers, product managers, and support & operation executives. You should also be able to write a proposal for your idea/solutions and take feedback from the team.
- Previous work experience at a product-based company or startup would be a bonus.
Personality traits we really admire
- Great attitude to ask questions, learn and suggest process improvements.
- Pays great attention to detail and helps identify edge cases.
- Gives equal importance to planning, coding, code reviews, documentation, and testing.
- Highly motivated, bringing fresh ideas and perspectives to help us move toward our goals faster.
- Follows release cycles and shows absolute commitment to deadlines.
Role: Software Developer
Industry Type: IT Services & Consulting
Functional Area: IT Software - Application Programming, Maintenance
Employment Type
Role Category: Programming & Design



Job Responsibilities
✓ Perform the role of Technical Lead on assigned projects to drive solution design (especially backend) and API services development.
✓ Be the thought leader and champion for the above-mentioned technologies.
✓ Drive technical analysis for new projects including planning and driving proof-of-concepts, if needed.
✓ Drive tasks related to backend development by providing architectural and technical leadership to mid-tier and database developers.
✓ Conduct peer reviews in Git as the lead to confirm that developed code meets acceptable standards and guidelines.
✓ Work closely with the rest of the leads, mid-tier development, front-end developers, database developers, etc. to ensure end-to-end integrity of the solution being developed.
✓ Work closely with the rest of the tech leads and senior engineering leadership team to ensure reuse where applicable to increase productivity and throughput.
✓ Conduct technical interviews to staff open positions on the backend team.
✓ Delegate work and assignments to team members.
✓ Collaborate with the team to identify and fix technical problems.
✓ Analyze users' needs and find applications to serve them.
✓ Drive assigned tasks related to SOC 2 certification and ensure compliance with defined controls for areas under the lead's purview.
✓ Guide the team through technical issues and challenges.
✓ Prepare technical design documents that help the team understand the technical flow.
✓ Actively participate in customer calls, especially technical/architectural discussions, and provide inputs.
Required Experience:
✓ Backend Lead with around 14 years of experience
✓ Serverless computing architecture
✓ Node.js, MySQL, Jenkins, Python, and GitLab technologies
✓ Good knowledge of AWS Cloud
✓ Full-cycle AWS implementation experience
✓ Project experience in development and maintenance support for AWS web services and cloud-based implementations
✓ Experience leading teams of up to 10+ professionals
✓ Ability to manage multiple tasks and projects in a fast-moving environment
Educational Qualifications:
Engineering graduate (B.Tech) or MCA with relevant major subjects such as Computer Science
About Us:
We live in the realm of rising, ever evolving technology. From the cassette tape to nanotech skeletons, we’ve come a long way. Splitting an atom is no big deal – and who’d have thought that we’d talk through airwaves? Change is inevitable, and we’re the tide that brings it.
At Relinns, we breathe tech solutions and embrace innovation with open arms. With over 4 years of experience, we've seen phenomenal growth, which is a testament to the knowledge we've gathered over time. We have been fortunate enough to work with clients such as Apollo Tyres, Shahi Exports, Manchester City Football Club, etc.
Our team is on the path of finding religion in the workplace of today. To find our way, we have three tools at our disposal: cutting edge technological tools, a meticulously dedicated work ethic, and crisp bow-tie professionalism.
About the role:
We are looking for a Node.js Developer responsible for managing the interchange of data between the server and the client. Your primary focus will be the development of all server-side logic, definition, and maintenance of the central database, and ensuring high performance and responsiveness to requests from the client end. You will also be responsible for integrating the front-end elements built by your co-workers into the application. Therefore, a basic understanding of front-end technologies is necessary as well.
What You Need for this Position:
- Strong understanding of JavaScript, its quirks and workarounds.
- Basic understanding of TypeScript & its quirks and workarounds.
- Advanced knowledge of npm and the most frequently used libraries (e.g., Socket.io, Underscore.js, Passport).
- Knowledge of a Node.js ORM (Mongoose, Sequelize, Knex, etc.).
- Understanding of at least one Node.js framework: Express.js, Koa.js, Hapi.js, or another.
- Good understanding of OOP and data structures.
- Experience with JavaScript unit testing frameworks (preferably Unit.js or Mocha).
- Ability to write complex algorithms.
- Understanding of code versioning tools such as Git, Mercurial, or SVN.
What You Will Be Doing:
- Develop and provide solutions on JavaScript frameworks
- Develop high-traffic, flawless web applications using Node.js
- Participate in code and design reviews to ensure consistency in architecture and design/code practice
- Code with performance, scalability, and usability in mind
- Work on new tools aligned with leading industry trends, using new and emerging technologies, prototypes, and engineering process improvements
- Work closely with next-generation architecture development teams using cutting edge approaches and technologies
Top Reasons to Work with Us:
- We're a small, fast-paced growing team tackling huge new challenges every day.
- Learn new concepts while working with an intellectually curious and exceptionally talented team
- Friendly, high-growth work environment
- Competitive compensation


About Merchandise Operations (Merch Ops): Merchandise Operations (Merch Ops) is a merchandise management system positioned as a host system within retail solutions. It has the ability to maintain master/foundation data, create and manage purchase orders, create and manage prices and promotions, perform replenishment, and provide effective inventory control and financial management. Merch Ops provides business users with consistent, accurate, and timely data across an enterprise by allowing them to get the:
Right Goods in the...
• Right Silhouettes, Sizes and Colors; at the...
• Right Price; at the...
• Right Location; for the...
• Right Consumer; at the...
• Right Time; at the...
• Right Quantity.
About Team:
• Proven, passionate bunch of disruptors providing solutions that solve real-time supply chain problems.
• A well-mixed team of young and experienced members with product, domain, and industry knowledge.
• Expertise in designing and deploying massively scalable cloud-native SaaS products.
• The team currently comprises associates across the globe and is expected to grow rapidly.
Our current technical environment:
• Software: React JS, Node JS, Oracle PL/SQL, Git, REST APIs, JavaScript.
• Application Architecture: Scalable three-tier web application.
• Cloud Architecture: Private cloud, MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD).
• Frameworks/Others: Apache Tomcat, RDBMS, Jenkins, Nginx, TypeORM (Oracle), Express.
What you'll be doing:
• As a Staff Engineer you will be responsible for the design of the features in the product roadmap
• Creating and encouraging good software development practices engineering-wide, driving strategic technical improvements, and mentoring other engineers.
• You will write code as we expect our technical leadership to be in the trenches alongside junior engineers, understanding root causes and leading by example
• You will mentor engineers
• You will own relationships with other engineering teams and collaborate with other functions within Blue Yonder
• Drive architecture and designs to become simpler, more robust, and more efficient.
• Lead design discussions and come up with robust, more efficient designs to deliver the features in the product roadmap
• Take complete responsibility for the features developed, from coding through deployment
• Introduce new technology and tools for the betterment of the product
• Guide fellow engineers to look beyond the surface and fix root causes rather than symptoms.
What we are looking for:
• Bachelor's degree (B.E./B.Tech/M.Tech in Computer Science or a related specialization) and a minimum of 7 to 10 years of experience in software development, having been an architect for at least the last 1-2 years.
• Strong programming experience and background in Node JS and React JS.
• Hands-on development skills along with architecture/design experience.
• Hands-on experience designing, building, deploying, and maintaining enterprise cloud solutions.
• Demonstrable experience with, thorough knowledge of, and interest in cloud-native architecture, distributed microservices, multi-tenant SaaS solutions, cloud scalability, performance, and high availability
• Experience with API management platforms & providing / consuming RESTful APIs
• Experience with varied tools such as REST, Hibernate, RDBMS, Docker, Kubernetes, Kafka, React.
• Hands-on development experience on Oracle PL/SQL.
• Experience with DevOps and infrastructure automation.

Title: Platform Engineer
Location: Chennai
Work Mode: Hybrid (Remote and Chennai Office)
Experience: 4+ years
Budget: 16 - 18 LPA
Responsibilities:
- Parse data using Python, create dashboards in Tableau.
- Utilize Jenkins for Airflow pipeline creation and CI/CD maintenance.
- Migrate Datastage jobs to Snowflake, optimize performance.
- Work with HDFS, Hive, Kafka, and basic Spark.
- Develop Python scripts for data parsing, quality checks, and visualization.
- Conduct unit testing and web application testing.
- Implement Apache Airflow and handle production migration (an illustrative DAG sketch follows this list).
- Apply data warehousing techniques for data cleansing and dimension modeling.
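As an illustration of the Airflow work referenced above, and not an actual pipeline from this role, a minimal Apache Airflow DAG with the parse-then-load shape described in the responsibilities might look like the following; the DAG id, schedule, and task bodies are placeholder assumptions.

```python
# Hypothetical sketch of a parse-then-load Airflow DAG; all IDs and logic are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def parse_raw_files():
    # Placeholder for the Python parsing / data-quality logic described above.
    pass


def load_to_snowflake():
    # Placeholder for a Snowflake load step (e.g. via the Snowflake Python connector).
    pass


with DAG(
    dag_id="example_parse_and_load",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",        # illustrative schedule
    catchup=False,
) as dag:
    parse = PythonOperator(task_id="parse_raw_files", python_callable=parse_raw_files)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)
    parse >> load  # run the load only after parsing succeeds
```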
Requirements:
- 4+ years of experience as a Platform Engineer.
- Strong Python skills, knowledge of Tableau.
- Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
- Proficient in Unix Shell Scripting and SQL.
- Familiarity with ETL tools like DataStage and DMExpress.
- Understanding of Apache Airflow.
- Strong problem-solving and communication skills.
Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.
About the Company
Blue Sky Analytics is a Climate Tech startup building an API-based catalogue of Environmental Datasets by leveraging Satellite data, AI, and the cloud.
We are looking for a Project Manager to manage our engagement with Climate TRACE, a coalition founded by former United States Vice President Al Gore. YES, you read that right, and YES, you might get a chance to interact with him as part of this role.
Climate TRACE is a coalition created to build an open emissions inventory: the world's first comprehensive accounting of greenhouse gas (GHG) emissions based primarily on direct, independent observation. The vision of Climate TRACE is to make meaningful climate action faster and easier by mobilizing the global tech community to track greenhouse gas (GHG) emissions with unprecedented detail and speed. Climate TRACE members include (carbon)plan, Earthrise Media, Hudson Carbon, Hypervine, Johns Hopkins University Applied Physics Laboratory, OceanMinds, RMI, TransitionZero, WattTime, and many more! You will be driving the partnership between Blue Sky Analytics and Climate TRACE!
Your Role
- Maintain and foster relationships with all members of Climate TRACE.
- Build rapport and harvest new business avenues.
- Represent Blue Sky Analytics in various events.
- Participate in campaigns or new initiatives to push forth Climate TRACE and Blue Sky’s mission and vision.
- Ensure adherence to legally binding requirements.
- Plan and lead meetings, which may include internal and external stakeholders.
- Make proposals with timelines and ensure that the team meets the deadlines.
- Identify key stakeholders and establish and maintain positive business relationships among them (internal or external).
- Maintain regular communication with the coalition members.
- Define activities, responsibilities, critical milestones, resource and skill needs, interfaces, and budget. Optimize cost and time utilization, minimize waste, and deliver projects on time and on budget, within the contract and agreed scope, with high-quality results.
- Anticipate all possible risks and manage them by applying a suitable risk management strategy while developing contingency plans.
- Track and report project KPIs and analyze project health.
- Effective implementation of software delivery methodologies to improve project KPIs.
- Provide individual and team mentoring, ensuring high levels of team engagement and developing capabilities within the team.
- Adopt and build software engineering best practices that can be leveraged by other teams.
Requirements
- Excellent proven team management skills.
- Excellent written and oral communication skills.
- Solid organizational skills including attention to detail and multi-tasking skills.
- Minimum of 2-4 years of experience in partnership and stakeholder management.
- Understanding of the data ecosystem.
- Experience in strategy building will be a plus.
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community that you can use, contribute to, and collaborate on.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension-free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's also the non-work, fun aspect, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by availing your paid leave.



