
Company is a remote-first team looking to scale our open-source feature flagging, remote config, and A/B testing product. With both SaaS and on-premises customers, we are investing in our globally distributed, low-latency API and making it simple for customers to deploy on their own infrastructure via Kubernetes and OpenShift.
We are looking for a Lead Server-Side Engineer who also enjoys working on DevOps, infrastructure, and orchestration, and who can help build out our existing API and infrastructure.
We are a really small team right now, and this is our first technical hire outside of the founders, so it’s a great opportunity to be part of something that is looking to scale quickly!
We are looking for someone who loves working with:
- Python. Django Rest Framework experience would be great, but not essential! We prioritise quality over quantity.
- Postgres, with InfluxDB, Oracle, MySQL, and Redis experience a plus.
- Docker, Kubernetes, Helm, OpenShift, and associated tooling.
- AWS, especially ECS, Lambda, RDS, and DynamoDB. Performance and uptime are super important to us.
- The challenge of scaling a global, distributed API to 10,000+ requests per second.
- We have SDKs in a bunch of languages, so the more polyglot you are the better.
- If you like writing JS and React, that would be awesome too.
We are a 100% remote team, currently based on the US West Coast and in Europe.

About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort; we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn't it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.
What We Expect:
• We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and
a deep interest in scalable, low-latency systems.
• You should have 3–4 years of experience in Python-based development and be eager to solve complex
performance and scalability challenges in trading and fintech applications.
• You measure success by your own growth, not external validation.
• You thrive on challenges, not on perks or financial rewards.
• Taking calculated risks excites you—you’re here to build, break, and learn.
• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading
environment.
• You understand the stakes—milliseconds can make or break trades, and precision is everything.
What You Will Do:
• Develop and maintain scalable backend systems using Python.
• Design and implement REST APIs and socket-based communication.
• Optimize code for speed, performance, and reliability.
• Collaborate with frontend teams to integrate server-side logic.
• Work with RabbitMQ, Kafka, Redis, and Elasticsearch for robust backend design.
• Build fault-tolerant, multi-producer/consumer systems.
Must-Have Skills:
• 3–4 years of experience in Python and backend development.
• Strong understanding of REST APIs, sockets, and network protocols (TCP/UDP/HTTP).
• Experience with RabbitMQ/Kafka, SQL & NoSQL databases, Redis, and Elasticsearch.
• Bachelor’s degree in Computer Science or related field.
Nice-to-Have Skills:
• Past experience in fintech, trading systems, algorithmic trading, or other low-latency systems.
• Experience with Go, C/C++, Erlang, or Elixir.
• Familiarity with microservices and CI/CD pipelines.
Job Description:
Summary
The Data Engineer will be responsible for designing, developing, and maintaining the data infrastructure, and must have experience with SQL and Python.
Roles & Responsibilities:
● Collaborate with product, business, and engineering stakeholders to understand key metrics, data needs, and reporting pain points.
● Design, build, and maintain clean, scalable, and reliable data models using DBT.
● Write performant SQL and Python code to transform raw data into structured marts and reporting layers.
● Create dashboards using Tableau or similar tools.
● Work closely with data platform engineers, architects, and analysts to ensure data pipelines are resilient, well-governed, and high quality.
● Define and maintain source-of-truth metrics and documentation in the analytics layer.
● Partner with product engineering teams to understand new features and ensure appropriate
instrumentation and event collection.
● Drive reporting outcomes by building dashboards or working with BI teams to ensure timely delivery of insights.
● Help scale our analytics engineering practice by contributing to internal tooling, frameworks, and best practices.
Who You Are:
Experience: 3 to 4 years in analytics/data engineering, with strong hands-on expertise in dbt, SQL, Python, and dashboarding tools.
● Experience working with modern data stacks (e.g., Snowflake, BigQuery, Redshift, Airflow).
● Strong data modeling skills (dimensional, star/snowflake schema, data vault, etc.).
● Excellent communication and stakeholder management skills.
● Ability to work independently and drive business outcomes through data.
● Exposure to product instrumentation and working with event-driven data is a plus.
● Prior experience in a fast-paced, product-led company is preferred.
Location: Malad, Mumbai (work from office)
6-day work week, with the 1st and 3rd Saturdays off
AWS Expertise: Minimum 2 years of experience working with AWS services like RDS, S3, EC2, and Lambda.
Roles and Responsibilities
1. Backend Development: Develop scalable and high-performance APIs and backend systems using Node.js. Write clean, modular, and reusable code following best practices. Debug, test, and optimize backend services for performance and scalability.
2. Database Management: Design and maintain relational databases using MySQL, PostgreSQL, or AWS RDS. Optimize database queries and ensure data integrity. Implement data backup and recovery plans.
3. AWS Cloud Services: Deploy, manage, and monitor applications using AWS infrastructure. Work with AWS services including RDS, S3, EC2, Lambda, API Gateway, and CloudWatch. Implement security best practices for AWS environments (IAM policies, encryption, etc.).
4. Integration and Microservices: Integrate third-party APIs and services. Develop and manage a microservices architecture for modular application development.
5. Version Control and Collaboration: Use Git for code versioning and maintain repositories. Collaborate with front-end developers and project managers for end-to-end project delivery.
6. Troubleshooting and Debugging: Analyze and resolve technical issues and bugs. Provide maintenance and support for existing backend systems.
7. DevOps and CI/CD: Set up and maintain CI/CD pipelines. Automate deployment processes and ensure zero-downtime releases.
8. Agile Development:
Participate in Agile/Scrum ceremonies such as daily stand-ups, sprint planning, and retrospectives.
Deliver tasks within defined timelines while maintaining high quality.
Required Skills
Strong proficiency in Node.js and JavaScript/TypeScript.
Expertise in working with relational databases like MySQL/PostgreSQL and AWS RDS.
Proficient with AWS services including Lambda, S3, EC2, and API Gateway.
Experience with RESTful API design and GraphQL (optional).
Knowledge of containerization using Docker is a plus.
Strong problem-solving and debugging skills.
Familiarity with tools like Git, Jenkins, and Jira.
Proficient in Golang, Python, Java, C++, or Ruby (at least one)
Strong grasp of system design, data structures, and algorithms
Experience with RESTful APIs, relational and NoSQL databases
Proven ability to mentor developers and drive quality delivery
Track record of building high-performance, scalable systems
Excellent communication and problem-solving skills
Experience in consulting or contractor roles is a plus
Senior Software Engineer
GoComet
**Desired Candidate**
- The ideal candidate is a self-motivated multi-tasker and a proven team player. You will be a senior developer responsible for developing new software products and enhancing existing ones.
- You should excel in working with large-scale applications and frameworks and have outstanding communication and leadership skills.
**Responsibilities**
- Writing clean, high-quality, high-performance, maintainable code
- Develop and support software including applications, database integration, interfaces, and new functionality enhancements
- Own and complete full projects beginning with identifying and communicating the problems to be solved, getting and incorporating feedback on proposed architectural solutions, and making a final decision as the owner of a project.
- Show curiosity to not only learn new things but fully understand how they work
- Be highly productive - have a reputation for getting things done quickly and efficiently
- Be a mentor for other engineers
- Deconstruct a problem into an executable action plan for yourself and other engineers, and execute it to a high standard
- Set and maintain high individual and team expectations
- Actively participate in frequent code/design/architecture reviews
- Be able to communicate well with all engineers regardless of seniority
- Generate support for a company/team decision
**Requirements**
- At least 2 years of experience in development, with extensive experience using Ruby, Golang, Python, or Node.js.
- Excellent understanding of Object Oriented Programming
- Ability to self-manage and work autonomously in a collaborative environment
- A focus on detail including around automated tests and documenting your code
- An agile mindset and the ability to adapt to changing priorities and requirements
- Strong analytical and problem-solving skills
- Passionate about working in a start-up
**What you will get**
- Product ownership - take autonomy over core products & product features
- Be a part of early tech team
- Stock options
**Our stack**
Microservice Architecture, Kubernetes, PostgreSQL, MongoDB, Redis, Ruby on Rails, ReactJs, Nodejs, Jenkins, RabbitMQ, Flutter, Apache Kafka
Our continuous releases are integrated with Jenkins, Bitbucket & Kubernetes. On the frontend, we use React for the views, organize the data flow with Flux architecture, and test our application with RSpec.
On the backend, we're a Rails shop (ROR) riding on AWS/GCP and Postgres RDS.
Working Days: 5 (Saturdays and Sundays off).
Why GoComet?
About GoComet (www.gocomet.com)
GoComet - our Logistics Resource Management (LRM) SaaS platform leverages the combined power of data science and machine intelligence. It facilitates sharp reverse auctions that bring out the best possible end-to-end rates for shipments, saves time, optimises operations, and increases deal transparency and efficiency in enterprises' freight procurement processes.
Owing to our growing impact and potential, the Singapore Government (SGInnovate) is now backing us as an investor. Our global customers (including Fortune 500 conglomerates) such as Schaeffler, Glenmark, Sun Pharma, Polyplex, and Indorama Ventures trust and recommend us.
Besides, we were also recently mentioned in the Gartner Visibility Guide.
About the Role-
Thinking big and executing beyond what is expected. The challenges cut across algorithmic problem solving, systems engineering, machine learning and infrastructure at a massive scale.
Reason to Join-
An opportunity for innovators, problem solvers, and learners. The work is innovative, empowering, rewarding, and fun, with an amazing office, competitive pay, and an excellent benefits package.
Requirements and Responsibilities - (please read carefully before applying)
- Overall experience of 3-6 years in Java/Python frameworks and machine learning.
- Develop web services using REST, XSD, XML, Java, Python, and AWS APIs.
- Experience with Elasticsearch, Solr, or Lucene: search engines, text mining, and indexing.
- Experience with highly scalable tools like Kafka, Spark, Aerospike, etc.
- Hands-on experience in design, architecture, implementation, performance and scalability, and distributed systems.
- Design, implement, and deploy highly scalable and reliable systems.
- Troubleshoot the Solr indexing process and querying engine.
- Bachelor's or Master's in Computer Science from a Tier 1 institution
About the role
Checking quality is one of the most important tasks at Anakin. Our clients price their products based on our data, and minor errors on our end can cost a client millions of dollars. You would work with multiple tools and with people across various departments to ensure the accuracy of the data being crawled. You would set up manual and automated processes and make sure they run, ensuring the highest possible data quality.
You are the engineer other engineers can count on. You embrace every problem with enthusiasm. You remove hurdles, and you are a self-starter and a team player. You have the hunger to venture into unknown areas and make the system work.
Your Responsibilities would be to:
- Understand customer web scraping and data requirements; translate these into test approaches that include exploratory manual/visual testing and any additional automated tests deemed appropriate
- Take ownership of the end-to-end QA process in newly-started projects
- Draw conclusions about data quality by producing basic descriptive statistics, summaries, and visualisations
- Proactively suggest and take ownership of improvements to QA processes and methodologies by employing other technologies and tools, including but not limited to: browser add-ons, Excel add-ons, UI-based test automation tools etc.
- Ensure that project requirements are testable; work with project managers and/or clients to clarify ambiguities before QA begins
- Drive innovation and advanced validation and analytics techniques to ensure data quality for Anakin's customers
- Optimize data quality codebases and systems to monitor the Anakin family of app crawlers
- Configure and optimize the automated and manual testing and deployment systems used to check the quality of billions of data points across the company's 1,000+ crawlers
- Analyze data and bugs that require in-depth investigations
- Interface directly with external customers including managing relationships and steering requirements
Basic Qualifications:
- 2+ years of experience as a backend or a full-stack software engineer
- Web scraping experience with Python or Node.js
- 2+ years of experience with AWS services such as EC2, S3, Lambda, etc.
- Should have managed a team of software engineers
- Should be paranoid about data quality
Preferred Skills and Experience:
- Deep experience with network debugging across all OSI layers (Wireshark)
- Knowledge of networks or/and cybersecurity
- Broad understanding of the landscape of software engineering design patterns and principles
- Ability to work quickly and accurately in a high-pressure environment, diagnosing and fixing run-time bugs within minutes
- Excellent communicator, both written and verbal
Additional Requirements:
- Must be available to work extended hours and weekends when needed to meet critical deadlines
- Must have an aversion to politics and BS; let your work speak for itself.
- Must be comfortable with uncertainty. In almost all the cases, your job will be to figure it out.
- Must not be bound to a comfort zone. You will often need to challenge yourself to go above and beyond.