
Lightning Job By Cutshort ⚡
As part of this feature, you can expect status updates about your application and replies within 96 hours (once the screening questions are answered)
About Nanonets :
Nanonets is a startup headquartered in the San Francisco Bay Area, solving real-world business problems with cutting-edge deep learning. We are backed by prestigious investors from Silicon Valley, such as Y Combinator (Sam Altman was our group partner at YC), SV Angels, and Elevation Capital. Our product automates complex business processes involving unstructured data: deep learning converts the data into a structured format and connects multiple applications with each other, all without manual intervention.
Since 2021, we have been building and using large-scale multimodal deep learning architectures, such as GPT-4, which have gained popularity in recent times. Some of our recent work uses these architectures to automatically build workflows intended to completely replace RPA as an industry.
If you are looking to work at a startup with really smart colleagues that works on state-of-the-art deep learning architectures, solves real-world problems, and has product-market fit with rapidly growing customers and revenue, Nanonets would be an ideal place for you!
Job Description
The role can be summed up as building and deploying cutting-edge, generalised deep learning architectures that can solve complex business problems, like converting unstructured data into a structured format, without hand-tuning features/models. You are expected to build state-of-the-art models that are best in the world for solving these problems, continuously experimenting and incorporating new advancements in the field into these architectures.
What We Expect From You
- Strong grasp of machine learning concepts
- Strong command of the low-level operations involved in building architectures like Transformers, EfficientNet, ViT, Faster R-CNN, etc., and experience implementing them in PyTorch/JAX/TensorFlow
- Experience with the latest semi-supervised, unsupervised, and few-shot deep learning methods in the NLP/CV domains
- Strong command of probability and statistics
- Strong programming skills
- Have previously shipped something of significance, e.g., implemented a paper or made significant changes to an existing architecture
Interesting Projects Other Senior DL Engineers Have Completed
- Deployed large-scale multi-modal architectures that can understand both text and images well
- Built an AutoML platform that automatically selects the best architecture and fine-tuning method based on the type and amount of data
- Built best-in-the-world models to process documents like invoices, receipts, passports, driving licenses, etc.
- Hierarchical information extraction from documents. Robust modeling for the tree-like structure of sections inside sections in documents
- Extracting complex tables — wrapped around tables, multiple fields in a single column, cells spanning multiple columns, tables in warped images, etc.
- Enabling few-shot learning via SOTA fine-tuning techniques (a minimal sketch follows below)
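
To make the last item concrete, here is a minimal, purely illustrative sketch of one simple few-shot fine-tuning recipe: freeze a pretrained backbone and train only a new classification head on the handful of labeled examples available. The ResNet-50 backbone, the 5-class head, and the training loop are assumptions for illustration, not a description of the actual Nanonets stack.

```python
# Illustrative few-shot fine-tuning sketch (assumed setup, not the actual Nanonets code):
# freeze a pretrained backbone and train only a new classification head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                              # freeze pretrained weights
backbone.fc = nn.Linear(backbone.fc.in_features, 5)      # new 5-class head (hypothetical)

optimizer = torch.optim.AdamW(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on the small labeled set."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Stronger few-shot results would typically come from parameter-efficient fine-tuning methods such as LoRA or prompt tuning; the linear-probing recipe above is only the simplest baseline.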

Similar jobs
Job Description
We are looking for an experienced engineer to join our data science team, who will help us design, develop, and deploy machine learning models in production. You will develop robust models, prepare their controlled deployment into production, and provide appropriate means to monitor their performance and stability after deployment.
What You’ll Do will include (but is not limited to):
- Preparing datasets needed to train and validate our machine learning models
- Anticipating and building solutions for problems that interrupt availability, performance, and stability in our systems, services, and products at scale
- Defining and implementing metrics to evaluate the performance of the models, both for computing performance (such as CPU & memory usage) and for ML performance (such as precision, recall, and F1; a minimal example follows this list)
- Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
- Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployments
- Developing, testing, and evaluating tools for machine learning model deployment, monitoring, and retraining
- Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
- Supporting solutions ranging from rule-based and classical ML techniques to the latest deep learning systems
- Partnering with cross-functional team members to bring large-scale data engineering solutions to production
- Communicating your approach and results to a wider audience through presentations
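
As a hedged illustration of the metrics item above, the following sketch computes the ML-performance side (precision, recall, F1) with scikit-learn on toy labels; the computing-performance metrics (CPU and memory usage) and the actual monitoring pipeline are not shown and are assumed to live elsewhere.

```python
# Toy illustration of the ML-performance metrics named in the description.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]   # ground-truth labels (toy data)
y_pred = [1, 0, 0, 1, 0, 1]   # model predictions (toy data)

metrics = {
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred),
}
print(metrics)  # in practice these values would be exported to a monitoring system
```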
Your Qualifications:
- Demonstrated success with machine learning in a SaaS or Cloud environment, with hands-on knowledge of model creation and deployments in production at scale
- Good knowledge of traditional machine learning methods and neural networks
- Experience with practical machine learning modeling, especially in time-series forecasting, analysis, and causal inference.
- Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series, such as clustering, classification, ARIMA, and decision trees, is preferred (a toy illustration follows this list).
- Ability to implement data import, cleansing and transformation functions at scale
- Fluency in Docker, Kubernetes
- Working knowledge of relational and dimensional data models, with appropriate visualization and dimensionality-reduction techniques such as PCA.
- Solid English skills to effectively communicate with other team members
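
As a toy illustration of the anomaly-detection item above, here is one very simple statistical approach (a rolling z-score over recent past values). It is only a sketch, not the ARIMA- or clustering-based methods the role may actually use.

```python
# Simple rolling z-score anomaly detection on a toy time series (illustrative only).
import pandas as pd

series = pd.Series([10, 11, 10, 12, 11, 55, 10, 11], dtype=float)

baseline = series.shift(1).rolling(window=3, min_periods=1).mean()  # mean of recent past
spread = series.shift(1).rolling(window=3, min_periods=2).std()     # spread of recent past
z_scores = (series - baseline) / spread

anomalies = series[z_scores.abs() > 3.0]
print(anomalies)  # flags the spike at index 5
```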
Due to the nature of the role, it would also be nice if you have:
- Experience with large datasets and distributed computing, especially with the Google Cloud Platform
- Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
- Experience with NoSQL and Graph databases
- Experience working in a Colab, Jupyter, or Python notebook environment
- Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
- Knowledge of the Java, Scala, or Go programming languages
- Familiarity with KubeFlow
- Experience with transformers, for example the Hugging Face libraries
- Experience with OpenCV
About Egnyte
In a content-critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com.
#LI-Remote
Job Description
Responsibilities:
* Work on real-world computer vision problems
* Write robust industry-grade algorithms
* Leverage OpenCV, Python and deep learning frameworks to train models.
* Use Deep Learning technologies such as Keras, TensorFlow, PyTorch, etc.
* Develop integrations with various in-house or external microservices.
* Must have experience in deployment practices (Kubernetes, Docker, containerization, etc.) and model compression practices (a brief sketch follows this list)
* Research latest technologies and develop proof of concepts (POCs).
* Build and train state-of-the-art deep learning models to solve Computer Vision-related problems, including, but not limited to:
* Segmentation
* Object Detection
* Classification
* Object Tracking
* Visual Style Transfer
* Generative Adversarial Networks
* Work alongside other researchers and engineers to develop and deploy solutions for challenging real-world problems in the area of Computer Vision
* Develop and plan Computer Vision research projects in terms of scope of work, including formal definition of research objectives and outcomes
* Provide specialized technical/scientific research to support the organization on different projects for existing and new technologies
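
For the deployment and model-compression bullet above, here is a hedged sketch of one common compression practice, post-training dynamic quantization in PyTorch. The model and layer choices are placeholders; the listing does not specify which compression techniques are actually used.

```python
# Post-training dynamic quantization sketch (illustrative model, not production code).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Quantize the Linear layers to int8 weights; activations are quantized dynamically at runtime.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```

Packaging such a model behind a small service image is then a matter of a standard Dockerfile and a Kubernetes Deployment, which are not shown here.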
Skills:
* Object Detection
* Computer Science
* Image Processing
* Computer Vision
* Deep Learning
* Artificial Intelligence (AI)
* Pattern Recognition
* Machine Learning
* Data Science
* Generative Adversarial Networks (GANs)
* Flask
* SQL
- Lead the data science, ML, product analytics, and insights functions by translating sparse and decentralized datasets to develop metrics, standardize processes, and lead the path from data to insights.
- Building visualizations, models, pipelines, alerts/insights systems, and recommendations in Python/Java to support business decisions and operational experiences.
- Advising executives on calibration strategy, DEI, and workforce planning.
- Architecting end-to-end prediction pipelines and managing them (a minimal sketch appears after this list)
- Scoping projects and mentoring 2-4 people
- Owning parts of the AI and data infrastructure of the organization
- Develop state-of-the-art deep learning/classical models
- Continuously learn new skills and technologies and implement them when relevant
- Contribute to the community through open-source, blogs, etc.
- Make a number of high-quality decisions about infrastructure, pipelines, and internal tooling.
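
As a minimal sketch of the end-to-end prediction pipeline item in the list above (assuming a scikit-learn stack; the listing only mentions Python/Java), a pipeline object bundles preprocessing and the model so the same artifact can be trained, versioned, and served:

```python
# Minimal end-to-end prediction pipeline sketch (assumed scikit-learn stack).
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingClassifier

pipeline = Pipeline([
    ("scale", StandardScaler()),              # feature standardization
    ("model", GradientBoostingClassifier()),  # prediction step
])

# Training and inference use the same object:
# pipeline.fit(X_train, y_train)
# predictions = pipeline.predict(X_new)
```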
What we are looking for
- Deep understanding of core concepts
- Broader knowledge of different types of problem statements and approaches
- A strong command of Python and its standard library
- Knowledge of industry-standard tools like scikit-learn, TensorFlow/PyTorch, etc.
- Experience with at least one of Computer Vision, Forecasting, NLP, or Recommendation Systems is a must
- A get shit done attitude
- A research mindset and the creativity to use previous work to your advantage
- A helping/mentoring first approach towards work
Lead Machine Learning Engineer
About IDfy
IDfy has been ranked amongst the World's Top 100 Regulatory Technology companies for the last two years. IDfy's AI-powered technology solutions help real people unlock real opportunities. We create the confidence required for people and businesses to engage with each other in the digital world. If you have used a major payment wallet, digitally opened a bank account, used a self-drive car, played a real-money online game, or hosted people through Airbnb, it's quite likely that your identity has been verified through IDfy at some point.
About the team
- The machine learning team is a close-knit team responsible for building models and services that support key workflows for IDfy.
- Our models are critical for these workflows and as such are expected to perform accurately and with low latency. We use a mix of conventional and hand-crafted deep learning models.
- The team comes from diverse backgrounds and experience. We respect opinions and believe in honest, open communication.
- We work directly with business and product teams to craft solutions for our customers. We know that we are, and function as, a platform and not a services company.
About the role
In this role you will:
- Work on all aspects of a production machine learning platform: acquiring data, training and building models, deploying models, building API services for exposing these models (a minimal serving sketch appears below), maintaining them in production, and more.
- Work on performance tuning of models
- From time to time work on support and debugging of these production systems
- Work on researching the latest technology in our areas of interest and applying it to build newer products and enhancements to the existing platform
- Building workflows for training and production systems
- Contribute to documentation
While the emphasis will be on researching, building, and deploying models into production, you will be expected to contribute to all the aspects mentioned above.
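
As a minimal, hypothetical sketch of "building API services for exposing these models", the snippet below wraps a stand-in predict function behind an HTTP endpoint using Flask. Flask is chosen purely for illustration; IDfy's actual serving stack (which also includes Go and Elixir services) is not specified here.

```python
# Hypothetical model-serving endpoint (Flask chosen purely for illustration).
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict(features):
    # Stand-in for a real model call (e.g., a loaded deep learning model).
    return {"label": "ok", "score": 0.99}

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    payload = request.get_json()
    return jsonify(predict(payload["features"]))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # port is a placeholder
```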
About you
- You are a seasoned machine learning engineer (or data scientist). Our ideal candidate is someone with 8+ years of experience in production machine learning.
Must Haves
- You should be experienced in framing and solving complex problems with the application of machine learning or deep learning models.
- Deep expertise in computer vision or NLP with the experience of putting it into production at scale.
- You have experienced first-hand, and understand, that modelling is only a small part of building and delivering AI solutions, and you know what it takes to keep a high-performance system up and running.
- Managing a large-scale production ML system for at least a couple of years
- Optimization and tuning of models for deployment at scale
- Monitoring and debugging of production ML systems
- An enthusiasm and drive to learn, assimilate, and disseminate state-of-the-art research. A lot of what we are building will require innovative approaches using newly researched models and applications.
- Past experience of mentoring junior colleagues
- Knowledge of and experience in ML Ops and tooling for efficient machine learning processes
Good to Have
- Our stack also includes languages like Go and Elixir. We would love it if you know any of these or take interest in functional programming.
- We use Docker and Kubernetes for deploying our services, so an understanding of this would be useful to have.
- Experience in using other platforms, frameworks, and tools.
Other things to keep in mind
- Our goal is to help a significant part of the world’s population unlock real opportunities. This is an opportunity to make a positive impact here, and we hope you like it as much as we do.
Life At IDfy
People at IDfy care about creating value. We take pride in the strong collaborative culture that we have built, and our love for solving challenging problems. Life at IDfy is not always what you’d expect at a tech start-up that’s growing exponentially every quarter. There’s still time and space for balance.
We host regular talks, events and performances around Life, Art, Sports, and Technology; continuously sparking creative neurons in our people to keep their intellectual juices flowing. There’s never a dull day at IDfy. The office environment is casual and it goes beyond just the dress code. We have no conventional hierarchies and believe in an open-door policy where everyone is approachable.
Responsibilities
- Own the design, development, testing, deployment, and craftsmanship of the team’s infrastructure and systems capable of handling massive amounts of requests with high reliability and scalability
- Leverage deep and broad technical expertise to mentor engineers and provide leadership on resolving complex technology issues
- Entrepreneurial and out-of-box thinking essential for a technology startup
- Guide the team in unit-testing code for robustness, including edge cases, usability, and general reliability
Requirements
- In-depth understanding of image processing algorithms, pattern recognition methods, and rule-based classifiers
- Experience in feature extraction, object recognition and tracking, image registration, noise reduction, image calibration, and correction
- Ability to understand, optimize and debug imaging algorithms
- Understanding of and experience with the OpenCV library (a short example follows this list)
- Fundamental understanding of mathematical techniques involved in ML and DL schemas (instance-based methods, boosting methods, PGMs, neural networks, etc.)
- Thorough understanding of state-of-the-art DL concepts (sequence modeling, attention, convolution, etc.), along with a knack for imagining new schemas that work for the given data.
- Understanding of engineering principles and a clear understanding of data structures and algorithms
- Experience in writing production-level code using either C++ or Java
- Experience with technologies/libraries such as pandas, NumPy, and SciPy in Python
- Experience with TensorFlow and scikit-learn.
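
As a short, assumed example of the OpenCV items above (noise reduction and basic feature extraction), the file name and parameter values below are placeholders:

```python
# Illustrative OpenCV usage: denoise an image, then extract simple features.
import cv2

img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
assert img is not None, "replace sample.png with a real image"

denoised = cv2.fastNlMeansDenoising(img, h=10)          # noise reduction
corners = cv2.goodFeaturesToTrack(denoised, maxCorners=100,
                                  qualityLevel=0.01, minDistance=10)
edges = cv2.Canny(denoised, 100, 200)                   # edge map, e.g. as registration cues
```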
- Work individually or as part of a team on data science projects, and work closely with lines of business to understand business problems and translate them into identifiable machine learning problems that can be delivered as technical solutions.
- Build quick prototypes to check feasibility and value to the business.
- Design, train, and deploy neural networks for computer vision and machine learning-related problems.
- Perform various complex activities related to statistical/machine learning.
- Coordinate with business teams to provide analytical support for developing, evaluating, implementing, monitoring, and executing models.
- Collaborate with technology teams to deploy the models to production.
Key Criteria:
- 2+ years of experience in solving complex business problems using machine learning.
- Understanding of and modeling experience with supervised, unsupervised, and deep learning models; hands-on knowledge of data wrangling, data cleaning/preparation, and dimensionality reduction is required (a brief example follows this list).
- Experience in Computer Vision/Image Processing/Pattern Recognition, Machine Learning, Deep Learning, or Artificial Intelligence.
- Understanding of Deep Learning architectures like InceptionNet, VGGNet, FaceNet, YOLO, SSD, R-CNN, Mask R-CNN, ResNet.
- Experience with one or more deep learning frameworks e.g., TensorFlow, PyTorch.
- Knowledge of vector algebra, statistical and probabilistic modeling is desirable.
- Proficiency in programming skills involving Python, C/C++, and Python Data Science Stack (NumPy, SciPy, Pandas, Scikit-learn, Jupyter, IPython).
- Experience working with Amazon SageMaker or Azure ML Studio for deployments is a plus.
- Experience with data visualization software such as Tableau, the ELK stack, etc., is a plus.
- Strong analytical, critical thinking, and problem-solving skills.
- B.E./B.Tech./M.E./M.Tech. in Computer Science, Applied Mathematics, Statistics, Data Science, or a related Engineering field.
- Minimum 60% in Graduation or Post-Graduation
- Great interpersonal and communication skills
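
As a brief, illustrative example of the dimensionality-reduction skill listed in the criteria above (the random data and component count are assumptions, not anything specified by the role):

```python
# PCA sketch: reduce 50 raw features to 10 components (toy data).
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(200, 50)                     # 200 samples, 50 raw features
X_reduced = PCA(n_components=10).fit_transform(X)
print(X_reduced.shape)                          # (200, 10)
```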
Responsibilities:
- Identify complex business problems and work towards building analytical solutions in order to create large business impact.
- Demonstrate leadership through innovation in software and data products from ideation/conception through design, development and ongoing enhancement, leveraging user research techniques, traditional data tools, and techniques from the data science toolkit such as predictive modelling, NLP, statistical analysis, vector space modelling, machine learning etc.
- Collaborate and ideate with cross-functional teams to identify strategic questions for the business that can be solved and champion the effectiveness of utilizing data, analytics, and insights to shape business.
- Contribute to company growth efforts, increasing revenue and supporting other key business outcomes using analytics techniques.
- Focus on driving operational efficiencies by use of data and analytics to impact cost and employee efficiency.
- Baseline current analytics capability, ensure optimum utilization, and drive continued advancement to stay abreast of industry developments.
- Establish self as a strategic partner with stakeholders, focused on full innovation system and fully supportive of initiatives from early stages to activation.
- Review stakeholder objectives and team's recommendations to ensure alignment and understanding.
- Drive analytics thought leadership and effectively contribute towards transformational initiatives.
- Ensure accuracy of data and deliverables of reporting employees with comprehensive policies and processes.

