
About Leaf Innovation Pvt Ltd
LEAF is a safety technology company founded in February 2015 with the mission of solving the problem of human safety by disrupting the largely unorganized security market. LEAF launched its first product, Safer, a smart safety wearable, in 2015 to address women's safety; it has been sold in more than 25 countries. LEAF was part of Prime Minister Modi's delegation to Silicon Valley as one of the Top 10 Innovators of India. As the first Indian company to win the prestigious Million Dollar X Prize award at the United Nations, Leaf is on a mission to make a billion people safe.



- At least one year of experience with PHP or NodeJS.
- Experience with at least one relational database (e.g., MySQL) or NoSQL database (e.g., MongoDB).
- Experience with multiple PHP frameworks (e.g., WordPress, Laravel, CodeIgniter, Symfony, CakePHP) is a plus.
- Proficiency designing and building APIs (a minimal sketch follows this list).
- Comfortable in a work environment that requires strong problem-solving skills and independent self-direction, coupled with team collaboration and open communication.
- Thorough knowledge of Ajax, JavaScript, HTML, and CSS; XHTML and HTML5 coding is desirable.
- Experience developing in both Linux and Windows environments. Understanding of RESTful web services.
- Work in a fast-paced, creative atmosphere to develop new ideas that adapt to evolving user needs.
- Experience using Git/GitHub. Excellent communication skills.
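To make the API requirement concrete, here is a minimal sketch of a JSON-over-HTTP endpoint. The posting names PHP and NodeJS; Go is used here only for illustration, and the `/jobs` route and `Job` type are invented for the example.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Job is a hypothetical resource used only for this sketch.
type Job struct {
	ID    int    `json:"id"`
	Title string `json:"title"`
}

// jobsHandler returns a fixed list of jobs as JSON; a real service
// would read from a database and support other HTTP methods as well.
func jobsHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode([]Job{{ID: 1, Title: "Backend Engineer"}})
}

func main() {
	http.HandleFunc("/jobs", jobsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```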



At Ocean Friends, we are looking to positively impact the lives of families dealing with chronic health conditions through technology. We are looking for self-driven and motivated Software Engineers with the intent and ability to work across a variety of technology problems, including, but not limited to, mobile app development, AI/ML, and user experience.


Stealth fintech startup looking for a Software Engineer with 2+ years of experience.
We are looking to hire a few engineers for this position ASAP.
Well funded!
Permanent remote position!
Competitive salary!
You will be part of the founding engineering team. We work with a cutting-edge technology stack: Cassandra, Terraform, Kubernetes, Redis, MongoDB, InfluxDB, Grafana, GoLang, and AWS.
You will get the opportunity to work on a massive-scale project, crawling 100+ million pages per day, and on very complex problems that will help you grow as an engineer.
Requirements:
- Past work experience with crawlers/scrapers is a MUST
- Self-starter mentality; able to pick up new skills and work independently
- Experience with programming languages like Java, C#, Go, Python, or PHP
- Understanding of concepts like HTTP, sessions/cookies, and IP rotation
- Experience with AWS or Google Cloud
- Familiarity with multithreaded architectures (see the sketch after this list)
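To illustrate the HTTP, session/cookie, and concurrency concepts above, here is a rough sketch in Go (one of the listed languages) of a bounded worker pool sharing a cookie jar. The URLs and worker count are placeholders, and IP rotation (e.g., per-worker proxies) is omitted.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/cookiejar"
	"sync"
)

func main() {
	// A shared cookie jar keeps session cookies across requests.
	jar, _ := cookiejar.New(nil)
	client := &http.Client{Jar: jar}

	urls := []string{"https://example.com/a", "https://example.com/b"} // placeholders
	jobs := make(chan string)
	var wg sync.WaitGroup

	// A small worker pool bounds concurrency instead of spawning
	// one goroutine per URL.
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for url := range jobs {
				resp, err := client.Get(url)
				if err != nil {
					fmt.Println("fetch failed:", err)
					continue
				}
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("fetched %s (%d bytes)\n", url, len(body))
			}
		}()
	}

	for _, u := range urls {
		jobs <- u
	}
	close(jobs)
	wg.Wait()
}
```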
Responsibilities:
- Ability to work in an existing codebase and collaborate with a diverse team
- Experience building enterprise-scale backend REST APIs with frameworks such as Nest.js & Express.js, using an API-first paradigm
- Intimate knowledge of crafting highly performant database queries
- Hands-on experience implementing relational database structures, including tables, indexes, views, etc. (a brief sketch follows this list)
- A mindset towards building systems for the cloud, and DevOps fundamentals
- Working knowledge of cloud infrastructure services is good to have; if not, a willingness to learn is expected
- A focus on building security, performance, and scalability into services from the beginning
- Experience debugging code and troubleshooting technical issues in order to craft appropriate solutions
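To ground the query-performance and indexing items above, here is a minimal sketch using Go's database/sql (the posting names Nest.js/Express.js; Go, the Postgres driver, and the `users` table are assumptions for illustration). The index turns the email lookup into an index scan instead of a full-table scan, and the parameterized query avoids SQL injection.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // one possible Postgres driver; an assumption here
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// An index on the filtered column is what typically makes the
	// lookup below fast as the table grows.
	if _, err := db.Exec(`CREATE INDEX IF NOT EXISTS idx_users_email ON users (email)`); err != nil {
		log.Fatal(err)
	}

	// Parameterized queries keep user input out of the SQL text.
	var id int
	if err := db.QueryRow(`SELECT id FROM users WHERE email = $1`, "a@example.com").Scan(&id); err != nil {
		log.Fatal(err)
	}
	fmt.Println("user id:", id)
}
```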
What You'll Do :
You will be a part of our backend team working on keeping our REST API and GraphQL API up and running and making sure that our users get the right data at the right time.
You will have ownership of developing and maintaining our backend services including users, courses, and operations tools that manage our product and logistics.
You will work on architecting and scaling highly available RESTful services and back-end systems from scratch.
This is a position for an experienced Node programmer with at least 2 years under the belt, but you don't have to be a rock star, a ninja, or a superhero to apply.
What You'll Need :
You will fit well in the backend team if you are passionate about technology and have experience programming in Node. Knowledge of technologies like Cassandra, Elasticsearch, PostgreSQL, REST and JSON will help you get going from day one.
As an experienced Node developer, you should be intimately familiar with the platform, with JavaScript, and with the staples of full-stack web development: HTTP, JavaScript, CSS, HTML, SQL.
It's a bonus if you're broadly familiar with other languages as well (we write some services in Go and have data pipelines written in Python), but your main work will be JavaScript through and through.
You will be a perfect match with our team if you love collaborating with people from all disciplines to solve complex problems, always want to learn new skills, and take ownership of your work.

Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver, and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low-latency access to distributed storage, auto-scaling, and self-healing (a small resiliency sketch follows this list).
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems.
- Solve technical problems and customer issues with technical expertise.
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure.
- Mentor other team members on quality and design.
- Collaborate with Product Management to deliver on customer requirements and innovation.
- Collaborate with Support and field teams to ensure that customers are successful with Dremio.
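Self-healing in practice often begins with retrying transient failures. As a loose illustration (not Dremio's implementation, and in Go rather than the Java/C++ named in the requirements below), here is a retry helper with exponential backoff and jitter:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs op up to maxAttempts times, sleeping with exponential
// backoff plus jitter between attempts so retries don't synchronize.
func retry(maxAttempts int, base time.Duration, op func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		sleep := base<<attempt + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("attempt %d failed (%v); retrying in %v\n", attempt+1, err, sleep)
		time.Sleep(sleep)
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	calls := 0
	err := retry(5, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("transient failure") // simulated flaky dependency
		}
		return nil
	})
	fmt.Println("result:", err)
}
```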
Requirements
- B.S./M.S./equivalent in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years of experience developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using the latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and to mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team




