About TensorIoT
TensorIoT is an AWS Advanced Consulting Partner. We help companies realize the value and efficiency of the AWS ecosystem. From building PoCs and MVPs to production-ready applications, we are tackling complex business problems every day and developing solutions to drive customer success.
TensorIoT's founders helped build world-class IoT and AI platforms at AWS and Google and are now creating solutions to simplify the way enterprises incorporate edge devices and their data into their day-to-day operations. Our mission is to help connect devices and make them intelligent. Our founders firmly believe in the transformative potential of smarter devices to enhance our quality of life, and we're just getting started!
TensorIoT is proud to be an equal opportunity employer. This means that we are committed to diversity and inclusion and encourage people from all backgrounds to apply. We do not tolerate discrimination or harassment of any kind and make our hiring decisions based solely on qualifications, merit, and business needs at the time.
Job description
As a Mid-Level Python Developer, you will:
- Analyze user needs and develop software solutions.
- Work with project managers and product owners to meet specification needs.
- Recommend software upgrades to optimize operational efficiency.
- Deliver scalable and responsive software using TypeScript and Python.
- Collaborate with other developers to design and optimize code.
- Create flowcharts and user guides for new and existing programs.
- Document all programming tasks and procedures.
- Perform routine software maintenance.
- Deploy and maintain CI/CD pipelines.
- Develop and maintain data pipelines. This includes scaling the pipeline to accommodate anticipated volume and complexity.
- Collaborate with external clients and internal team members to meet product deadlines.
We're looking for someone who has:
- Experience with AWS services (must)
- A bachelor's degree in Computer Science, Engineering, or a related field
- 4-8 years of experience in software development, computer engineering, or other related fields
- Expert-level experience with Python and Node.js
- Familiarity and comfort with REST APIs (a brief, hypothetical sketch follows this list)
- A deadline- and detail-oriented mindset
- Strong analytical and critical thinking skills
- Familiarity with DevOps tools and best practices
- Experience developing scalable data processing systems
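For illustration only, here is a minimal sketch of the kind of REST endpoint work referenced above. This is not TensorIoT code: Flask is chosen just for brevity, and the device-registry route and data are hypothetical.

```python
# Hypothetical sketch, not TensorIoT's codebase: a minimal REST endpoint.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a device registry; a production service would back this
# with a database and proper authentication.
DEVICES = {"dev-001": {"status": "online", "firmware": "1.2.0"}}

@app.get("/devices/<device_id>")
def get_device(device_id):
    device = DEVICES.get(device_id)
    if device is None:
        return jsonify({"error": "device not found"}), 404
    return jsonify(device)

if __name__ == "__main__":
    app.run(debug=True)
```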
Bonus points for someone with:
- Experience with IoT, ML, AI, or VR
- Amazon Web Services (AWS) certification(s) (preferred)
- Experience with microcomputers and microcontrollers
- Experience with the following AWS DevOps services: CodePipeline, CodeBuild, or CodeCommit
- Experience with the following Data Engineering services: AWS Lake Formation, Glue, Redshift, EMR, or QuickSight.
- Strong experience with Docker
- Good knowledge of any cloud platform, such as Azure
- Must be comfortable working in a Linux environment
- Must have exposure to the IoT domain and its protocols (Zigbee, BLE, LoRa, Modbus)
- Must be a good team player
- Strong communication skills
Your Opportunity
You will primarily be responsible for implementing features & building platforms to enable on-device and device/server-side combined ML workflows. You will possess strong skills in object-oriented software design and programming. You are excited about developing new features, maintaining existing code, fixing bugs, and contributing to overall system design.
Your Impact
Design, architect, develop, and maintain the backend for AI products, working closely with the Engineering Team. Ensure successful consumption of Saarthi.ai technology through APIs, SDKs, or applications, thereby enabling productization and monetization of the AI solutions.
What You’ll Do
· Work with development teams and product managers to ideate software solutions
· Design server-side architecture
· Take care of client-side deliverables alongside the SaaS application deliverables
· Develop and manage well-functioning databases and applications
· Write effective APIs
· Test software to ensure responsiveness and efficiency
· Troubleshoot, debug and upgrade software
· Create security and data protection settings
· Build features and applications with a mobile responsive design
· Write technical documentation
· Work with data scientists and analysts to improve software
What You Bring
· Proven experience as a Back-End Developer or similar role.
· Experience developing desktop applications.
· Strong working experience with back-end development using Node.js and JavaScript.
· Strong knowledge of databases (MongoDB, PostgreSQL).
· Experience with System Design and Architecture.
· Familiarity with common stacks.
· Familiarity with Parallel Threading, Concurrent calling and Aggregation Queries.
· Ability to write quality unit tests
· Setting up CI/CD, and integrating with logging and monitoring systems for the products or platforms
· Excellent communication and teamwork skills.
· Degree in Computer Science, Statistics or relevant field
- 5 years of experience as a Java/JEE developer, including Spring Boot
- Good knowledge of OOP concepts
- Experience in Java 8, JSP, Spring Core, Spring MVC, Spring REST, and Spring JPA repositories
- Experience in Hibernate, relational databases, and SQL
- Experience in REST API development
- Experience in implementing Jasper Reports
- Familiarity with Git and Maven
Roles and Responsibilities
● Leads more than one project end-to-end and collaborates across functions. Drives planning, estimation, and execution.
● Manages stakeholder expectations and offers scalable, reliable, performant, and easy-to-maintain solutions.
● Consistently delivers complex, well-backed, and bug-free products on time.
● Consistently makes well-considered technical/design decisions.
● Develops expertise in more than one area and shares knowledge with others; able to mentor/train in areas that are new to them.
● Drives people to solve engineering challenges.
● Enjoys the high respect of Tech and other cross-functional teams.
● Demonstrates effective communication with the project team, management, and internal/external clients as necessary.
● Surfaces both technical and non-technical team challenges and helps resolve them.
● Champions SDLC best practices and high quality standards.
● Contributes significantly to hiring high-performance candidates.
Experience & Skills
● Expert in RoR, Golang, Node.js, or Python. Good to have exposure to ML.
● Must have experience in cloud computing.
● Operates independently with almost no oversight.
● Able to apply domain expertise to think critically and make wise decisions for the team, taking into account tradeoffs and constraints.
● Communicates tech decisions through design docs and tech talks.
● Has delivered multiple projects with end-to-end engineering ownership.
● Keeps track of new technology/tools and embraces them as necessary.
● 12+ years of experience in a product-driven organization.
● A Bachelor's or Master's degree in engineering from a reputed institute (preferably IITs, NITs, or other top engineering institutes).
Senior Software Engineer (Python)
Job description
Fulfil’s software engineers develop the next-generation technologies that change how millions of customer orders are fulfilled by merchants. Our products need to handle information at massive scale. We're looking for engineers who bring fresh ideas from all areas into our technology.
As a senior software engineer, you will work on our Python-based ORM and applications that scale to handle millions of transactions every hour. This is mission-critical software, and your primary focus will be building robust and scalable solutions that are easy to maintain.
In this role, you will be collaborating closely with the rest of the team working on different layers of infrastructure in an international environment. Therefore, a commitment to collaborative problem solving, sophisticated design, and quality product are important.
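As a rough illustration of the ORM-centric work described above (Fulfil's own ORM is not shown here), the sketch below uses SQLAlchemy, which appears in the requirements further down; the Order model and its columns are hypothetical.

```python
# Illustrative sketch only: SQLAlchemy stands in for the proprietary ORM.
# The Order model and column names are hypothetical.
from sqlalchemy import Integer, String, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Order(Base):
    __tablename__ = "orders"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    merchant: Mapped[str] = mapped_column(String(64))
    status: Mapped[str] = mapped_column(String(32), default="open")

# SQLite in-memory keeps the example self-contained; production would use Postgres.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Order(merchant="acme"))
    session.commit()
    open_orders = session.scalars(
        select(Order).where(Order.merchant == "acme", Order.status == "open")
    ).all()
    print(len(open_orders))  # -> 1
```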
What You’ll Do:
- Own definition and implementation of API interfaces (REST and GraphQL). We take pride in our 100% open API with over 600 endpoints.
- Implement simple solutions to complex business logic that enables our merchants to manage financials, orders and shipments across millions of transactions.
- Build reusable components and packages for future use.
- Translate specs and user stories into reviewable, test covered patches.
- Peer review code and refactor existing code.
- Integrate with our eCommerce partners (Shopify, BigCommerce, Amazon), shipping partners (UPS, USPS, FedEx, DHL) and EDI.
- Manage Kubernetes and Docker based global deployment of our infrastructure.
Requirements
We’re Looking for Someone With:
- Experience working with ORMs like SQLAlchemy or Django
- Experience with SQL and databases (Postgres preferred)
- Experience in developing large server side applications and microservices
- Ability to create high quality code
- Experience with python testing tools (pytest) and test automation
- Familiarity with code versioning tools like GIT
- Strong sense of ownership and leadership quality
- Experienced in the tools of our web stack (a short sketch of how these commonly fit together follows this list):
  - Python
  - Celery
  - Postgres
  - Redis
  - RabbitMQ
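The following is a hypothetical sketch, not Fulfil's actual configuration, showing one common way the listed pieces connect: Celery with RabbitMQ as the message broker and Redis as the result backend, while Postgres access stays behind the ORM inside the task body. The task name and connection URLs are illustrative.

```python
# Hypothetical sketch: Celery wired to RabbitMQ (broker) and Redis (result backend).
from celery import Celery

app = Celery(
    "fulfilment",
    broker="amqp://guest:guest@localhost:5672//",  # RabbitMQ
    backend="redis://localhost:6379/0",            # Redis
)

@app.task
def reconcile_order(order_id: int) -> str:
    # A real task would load the order from Postgres via the ORM and update
    # shipment/financial records; here we only echo the id.
    return f"order {order_id} reconciled"

# Enqueue from application code (requires a running worker):
# reconcile_order.delay(42)
```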
Nice to Haves:
- Prior experience at a growth stage Internet/Software company
- Experience with ReactJS, Google Cloud, Heroku
- Cloud deployment and scaling experience
About Us:
Fulfil.io helps high growth, high volume merchants simplify operations and scale for growth. With the rise in omni-channel commerce, Fulfil was founded with the simple idea that merchant operations need to be simplified in order to deliver amazing retail experiences. Fulfil enables businesses to turn their back office operations into an accelerator for growth by integrating order management, inventory management, warehouse management, vendor/supplier management, wholesale, manufacturing, financials and customer service, into one seamless solution. We believe merchants should love their operations platform, and we work hard to make that happen every single day. Fulfil.io is a trusted solution for brands like EndySleep, Mejuri, Lie-Nielson Toolworks, and many more.
Fulfil.io is a venture backed technology company with offices in San Francisco, Toronto, and Bangalore. The team is made up of people who want to feel challenged at work, be the best at their craft and learn from one another. We come from different backgrounds and experiences, all passionate about the work we do, the team we do it with, and the customers we do it for. Join us in our journey to simplify operations and empower merchants around the world!
- Developing and maintaining all server-side network components.
- Ensuring optimal performance of the central database and responsiveness to front-end requests.
- Collaborating with front-end developers on the integration of elements.
- Designing customer-facing UI and back-end services for various business processes.
- Developing high-performance applications by writing testable, reusable, and efficient code.
- Documenting Node.js processes, including database schemas, as well as preparing reports.
- Keeping informed of advancements in the field of Node.js development.
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the DataLake Engine along with the Cloud Infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low latency access to distributed storage, auto scaling, and self healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S. or equivalent in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team