

Job Title: IT and Cybersecurity Network Backend Engineer (Remote)
Job Summary:
Join Springer Capital’s elite tech team to architect and fortify our digital infrastructure, ensuring robust, secure, and scalable backend systems that power cutting‑edge investment solutions.
Job Description:
Founded in 2015, Springer Capital is a technology-forward asset management and investment firm that redefines financial strategies through innovative digital solutions. We identify high-potential opportunities and leverage advanced technology to drive value, transforming traditional investment paradigms. Our culture is built on agility, creative problem-solving, and a relentless pursuit of excellence.
Job Highlights:
As an IT and Cybersecurity Network Backend Engineer, you will play a central role in designing, developing, and securing our backend systems. You’ll be responsible for creating bulletproof server architectures and integrating sophisticated cybersecurity measures to ensure our digital assets remain secure, reliable, and scalable—all while working fully remotely.
Responsibilities:
- Backend Architecture & Security:
- Design, develop, and maintain high-performance backend systems and RESTful APIs using technologies such as Python, Node.js, or Java.
- Implement advanced cybersecurity protocols including encryption, multi-factor authentication, and anomaly detection to safeguard our infrastructure (a minimal sketch follows this list).
- Network Infrastructure Management:
- Architect secure cloud and hybrid network solutions to protect sensitive data and ensure uninterrupted service.
- Develop robust logging, monitoring, and compliance mechanisms.
- Collaborative Innovation:
- Partner with cross-functional teams (DevOps, frontend, and product managers) to integrate security seamlessly into every layer of our technology stack.
- Participate in regular security audits, agile sprints, and technical reviews.
- Continuous Improvement:
- Keep abreast of emerging technologies and cybersecurity threats, proposing and implementing innovative solutions to maintain system integrity.
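As a loose illustration of the first two responsibilities, here is a minimal sketch of a token-protected REST endpoint, assuming Flask and PyJWT; the route, secret, and claims are invented for illustration and are not Springer Capital's actual API.

```python
# Minimal sketch of a token-protected REST endpoint (Flask + PyJWT).
# All names (SECRET_KEY, /api/v1/holdings) are illustrative only.
from functools import wraps

import jwt  # PyJWT
from flask import Flask, jsonify, request

app = Flask(__name__)
SECRET_KEY = "replace-with-a-vaulted-secret"  # never hard-code in production

def require_token(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return jsonify(error="missing bearer token"), 401
        try:
            claims = jwt.decode(auth[7:], SECRET_KEY, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return jsonify(error="invalid or expired token"), 401
        return view(claims, *args, **kwargs)
    return wrapper

@app.get("/api/v1/holdings")
@require_token
def holdings(claims):
    # Claims would normally drive per-user authorization checks here.
    return jsonify(user=claims.get("sub"), holdings=[])
```

A real deployment would layer on the other controls the posting names: TLS for encryption in transit, a second factor at login, and anomaly detection over the request logs.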
What We Offer:
- Advanced Learning & Mentorship: Work side-by-side with industry experts who will empower you to push the boundaries of cybersecurity and backend engineering.
- Impactful Work: Engage in projects that directly influence the security and scalability of our revolutionary digital investment strategies.
- Dynamic, Remote Culture: Thrive in a flexible, remote-first environment that champions creativity, collaboration, and work-life balance.
- Career Growth: Unlock long-term career advancement opportunities in a forward-thinking organization that values innovation and initiative.
Requirements:
- Degree (or current enrollment) in Computer Science, Cybersecurity, or a related field.
- Proficiency in at least one backend programming language (Python, Node.js, or Java) and hands-on experience with RESTful API design.
- Solid understanding of network security principles and experience implementing cybersecurity best practices.
- Passionate about designing secure systems, solving complex technical challenges, and staying ahead of industry trends.
- Strong analytical and communication skills, with the ability to work effectively in a collaborative, fast-paced environment.
About Springer Capital:
At Springer Capital, we blend financial expertise with digital innovation to shape tomorrow’s investment landscape. Our relentless drive to merge technology and asset management has positioned us as leaders in transforming traditional finance into dynamic, tech-enabled ventures.
Location: Global (Remote)
Job Type: Full-time
Pay: $50 USD per month
Work Location: Remote
Embark on your next challenge with Springer Capital—where your technical prowess and dedication to security help safeguard the future of digital investments.

Responsibilities:
- Develop core infrastructure in Python and Django.
- Develop models and business logic (e.g., transactions, payments, diet plans, search); see the Django sketch after this list.
- Architect servers and services that enable new product features.
- Build out newly enabled product features.
- Monitor system uptime and errors to drive us toward a high-performing and reliable product.
- Take ownership of, and understand the need for, code quality, elegance, and robust infrastructure.
- Experience working collaboratively on a software development team.
- Experience building scalable web applications.
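To make the "models and business logic" bullet concrete, a minimal Django sketch follows; the Payment model and settle_payment helper are hypothetical names, not part of any existing codebase.

```python
# Sketch of a Django model plus a small piece of business logic
# (settling a payment). Model and field names are illustrative only.
from django.db import models, transaction

class Payment(models.Model):
    user_id = models.BigIntegerField()
    amount = models.DecimalField(max_digits=12, decimal_places=2)
    status = models.CharField(max_length=16, default="pending")
    created_at = models.DateTimeField(auto_now_add=True)

def settle_payment(payment_id: int) -> None:
    """Atomically mark a pending payment as settled."""
    with transaction.atomic():
        # Row lock prevents two workers from settling the same payment.
        payment = Payment.objects.select_for_update().get(pk=payment_id)
        if payment.status != "pending":
            raise ValueError("payment already processed")
        payment.status = "settled"
        payment.save(update_fields=["status"])
```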
Skills:
- Minimum 4 years of industry or open-source experience.
- Proficiency in at least one OO language: Python (preferred), Golang, or Java.
- Ability to write high-performance, reliable, and maintainable code.
- Good knowledge of database structures, theories, principles, and practices.
- Experience working with AWS components (EC2, S3, RDS, SQS, ECS, Lambda).
The Sr. Analytics Engineer provides technical expertise in needs identification, data modeling, data movement, transformation mapping (source to target), and automation and testing strategies, translating business needs into technical solutions that adhere to established data guidelines and approaches, from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization, tuning, and resource allocation (see the sketch after this list).
- Excellent understanding of in-memory distributed computing frameworks like Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL or analytical databases (e.g., Redshift, BigQuery, Cassandra).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob Storage, AWS S3, Google Cloud Storage, etc.
- Deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus.
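As a rough illustration of the Spark tuning and optimized-workflow points above, a PySpark sketch follows; the configuration values, S3 paths, and column names are invented and would need to be sized to a real cluster.

```python
# Sketch of explicit resource allocation plus an optimized join in Spark.
# Parameter values and paths are illustrative, not recommendations.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("etl-sketch")
    # Resource allocation set explicitly rather than left to defaults.
    .config("spark.executor.memory", "8g")
    .config("spark.executor.cores", "4")
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

orders = spark.read.parquet("s3://bucket/orders/")        # hypothetical path
countries = spark.read.parquet("s3://bucket/countries/")  # small dim table

# Broadcasting the small dimension table avoids a shuffle-heavy join.
enriched = orders.join(F.broadcast(countries), on="country_code", how="left")

(enriched.groupBy("country_name")
 .agg(F.sum("amount").alias("revenue"))
 .write.mode("overwrite")
 .parquet("s3://bucket/revenue_by_country/"))
```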


Requirements:
- Expert in Python, with good knowledge of Django.
- Able to integrate multiple data sources and databases into one system.
- Understanding of the threading limitations of Python (the GIL) and of multi-process architecture (a minimal sketch follows this list).
- Knowledge of user authentication and authorization between multiple systems, servers, and environments.
- Experience with Flask is a plus.
- Understanding of fundamental design principles behind a scalable application.
- Familiarity with event-driven programming in Python.
- Able to create database schemas that represent and support business processes.
- Proficient understanding of code versioning tools.
- Good communication skills.
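To make the threading bullet concrete: CPython's global interpreter lock prevents threads from running Python bytecode in parallel, so CPU-bound work is typically spread across processes instead. A minimal sketch, with the workload function invented for illustration:

```python
# CPython's GIL serializes CPU-bound threads, so CPU-heavy work is
# usually fanned out across processes rather than threads.
from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n: int) -> int:
    # Stand-in for real business logic (e.g., pricing or risk math).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(cpu_heavy, [10**6] * 8))
    print(results[:2])
```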

Eximius is a micro-VC fund that invests in early-stage startups. One of our portfolio startups, iTribe, is looking for "Hustlers", "First-Principles Thinkers", people who are "crazy about creating out-of-the-box products from scratch", "Outspoken", and "unafraid of users ripping your product apart".
About iTribe:
iTribe is on a mission to turn finance from a taboo topic into a dinner-table topic, which means making "Bharat" financially literate and independent. iTribe is a social network that allows anyone to discover, discuss, and vet ideas and to seek advice from like-minded people. iTribe aims to make finance simple, witty, and fun for everyone.
Founded in 2021 by IIT Kharagpur alumni, iTribe turns the founders' own journey of becoming investors into a product meant to help a billion users. It is an emerging startup backed by some of the most respected investors around the world.
Responsibilities
- Craft clean, manageable code, maintain proper documentation and code integrity
- Maintain quality and ensure 100% uptime
- Create, test and deploy the applications on production servers
- Continuously discover, evaluate, and implement new technologies and frameworks to maximize development efficiency
- Take responsibility for security and data protection.
- Work alongside product managers to architect and design new features.
- Collaborate with the rest of the engineering team to launch new features.
- Unblock peers and keep all the internal and external teams informed about the progress of development
Requirements
- 2-5 years of experience designing and developing server-side components (REST APIs, microservice architecture)
- Strong grounding in data structures and algorithms (a degree in computer science is an added advantage)
- OOP implementation experience with back-end programming languages (e.g., JavaScript, Python, Java)
- Good to have: experience with event-driven architecture (a minimal sketch follows this list)
- Good understanding of databases such as MongoDB, PostgreSQL, and MySQL
- Well versed in the software development life cycle
- Demonstrated ability to be a self-starter and to learn and implement new technologies and frameworks
- Excellent analytical and problem-solving skills
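To make the event-driven bullet concrete, here is a minimal in-process publish/subscribe sketch; a production system would typically sit on a broker such as Kafka or SQS, and the event names and handlers here are invented.

```python
# Minimal in-process event bus illustrating event-driven design.
# Event names and payloads are invented for illustration.
from collections import defaultdict
from typing import Callable

_handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event: str, handler: Callable[[dict], None]) -> None:
    _handlers[event].append(handler)

def publish(event: str, payload: dict) -> None:
    # Each subscriber reacts independently; the publisher stays decoupled.
    for handler in _handlers[event]:
        handler(payload)

subscribe("payment.settled", lambda p: print("notify user", p["user_id"]))
subscribe("payment.settled", lambda p: print("update ledger", p["amount"]))
publish("payment.settled", {"user_id": 42, "amount": 99.0})
```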
Why you should join iTribe?
- Build a product of a kind that hasn't been attempted in decades.
- Be a part of the founding team and work directly with the founders.
- Competitive Salary
- ESOPs - ownership in the company
- Medical insurance
- Subscriptions to premium platforms for learning
- Books that you ask for
- Friendly leave policy

Job Description
- Design & implement backend APIs
- Mentor junior developers technically.
- Actively work to reduce tech debt in the backend
- Work towards more stability & scalability of the backend
- Tech stack: Java, AWS, Aurora, etc.
Eligibility
- 4-8 years of product company experience
- OOP implementation experience; the programming language does not matter. We use Java internally but have hired folks from non-Java backgrounds.
- Hands-on experience with SQL, DynamoDB, Postgres, etc. preferred.
- Prior experience building REST APIs
- Advanced understanding of the AWS stack
- Prior experience solving problems at scale.

Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the DataLake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
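For a concrete feel of the Apache Arrow piece of this stack: Arrow defines the columnar in-memory format that engines like Dremio build on. A tiny PyArrow sketch, with column names and values invented for illustration:

```python
# Tiny PyArrow sketch of the columnar model that engines like Dremio
# build on. Column names and values are invented for illustration.
import pyarrow as pa
import pyarrow.compute as pc

table = pa.table({
    "region": ["emea", "apac", "emea"],
    "revenue": [120.0, 80.0, 200.0],
})

# Columnar layout lets the engine scan and filter whole columns at once.
emea = table.filter(pc.equal(table["region"], "emea"))
print(emea["revenue"].to_pylist())  # [120.0, 200.0]
```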
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low-latency access to distributed storage, auto-scaling, and self-healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S. or equivalent in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and to mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team
