Making templates and graphics for interactive marketing
Making great-looking dashboards
Designing A3-size sample forms

About Us
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT’. Embracing Software Craftsmanship values and eXtreme Programming Practices, we create well-crafted products for our clients. We partner with large organizations to help modernize their legacy code bases and work with startups to launch MVPs, scale or as extensions of their team to efficiently operationalize their ideas. We love to work with folks who are passionate about creating exceptional software, are continuous learners, and are painstakingly fussy about quality. 🚀
Location: Remote
Our Core Values
· Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
· Extreme Ownership: We own our work and its outcomes fully.
· Proactive Collaboration: Teamwork elevates us all.
· Pursuit of Mastery: Continuous growth drives us.
· Effective Feedback: Honest, constructive feedback fosters improvement.
· Client Success: Our clients’ success is our success.
Experience Level
This role is ideal for engineers with 7+ years of hands-on software development experience, particularly in Ruby on Rails at scale.
Role Overview
If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality-obsessed as you are? Read on!
What You'll Do
- Write Tests First: Start by writing tests to ensure code quality
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement
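To make the test-first rhythm above concrete, here is a minimal, hypothetical sketch (shown in Python for brevity; the same flow applies in Ruby with RSpec): the assertions are written first, and `slugify` is the smallest implementation that makes them pass. The function and its name are illustrative, not part of any real codebase.

```python
# Hypothetical test-first example: the asserts below existed before the
# implementation, and the implementation is the minimal code passing them.

def slugify(title: str) -> str:
    """Lower-case a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# The "tests", written first and run on every change.
assert slugify("Clean Code Rocks") == "clean-code-rocks"
assert slugify("Write  Tests   First") == "write-tests-first"
```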
What We're Looking For
- Expertise in Ruby on Rails, Test-Driven Development, React or JavaScript, and TypeScript
- Strong skills in object-oriented programming, data structures, algorithms, and software engineering methodologies
- Ability to design and develop web architecture and optimize existing infrastructure
- Experience working in Agile and eXtreme Programming methodologies within a continuous deployment environment
- Interest in mastering technologies like web server ecosystems, relational DBMS, TDD, CI/CD tools
- Knowledge of server configuration and deployment infrastructure
- Experience using source control, bug tracking systems, writing user stories, and technical documentation
What We Offer
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
- And More: Extra perks to support your well-being and professional growth
Work Environment
- Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility: while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the right balance between freedom and responsibility, we uphold the high quality standards our customers recognize us by. With asynchronous tools and a push for active participation, we foster a vibrant, hands-on environment where each team member’s engagement and contributions drive impactful results.
- Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
- Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.
Incubyte is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Senior Data (Platform) Engineer
Location: Hyderabad | Department: Technology, Data
About the Role
Are you passionate about building reliable, scalable data platforms that make analytics and AI development easier? As a Senior Data Platform Engineer, you will be hands-on in building, operating, and improving our core data platform and AI/LLM enablement tooling.
You’ll focus on infrastructure, orchestration, CI/CD, and reusable frameworks that support analytics engineering and AI-driven use cases. You’ll work closely with Analytics Engineering and Insights teams and support other departments as they integrate with our data systems.
What You'll Do
Data Platform & Infrastructure
- Build, deploy, and operate cloud infrastructure for data and AI workloads using Infrastructure as Code (Terraform).
- Provision and manage cloud resources across development, staging, and production environments.
- Develop and maintain CI/CD pipelines for data transformations, orchestration workflows, and platform services.
- Operate and scale containerized workloads on Kubernetes, including Airflow, internal APIs, and AI/LLM services.
- Troubleshoot and resolve infrastructure, pipeline, and orchestration failures to ensure platform reliability.
- Maintain and support existing ML services and pipelines to ensure stability and reliability (No expectation to design or develop new ML models or training pipelines).
- Continuously monitor and optimize platform performance and cost.
Framework, Tooling and Enablement
- Build and maintain reusable frameworks and patterns for dbt, Airflow, cloud data warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.), and internal data and AI APIs.
- Build and support infrastructure and pipelines for AI/LLM-based use cases, including orchestration, integration, and serving.
- Improve developer experience for Analytics Engineering and Insights teams by reducing friction in local development, deployments, and production workflows.
- Create and maintain technical documentation and examples to support self-service analytics and data development.
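The orchestration tools named above (Airflow, dbt) are built around dependency ordering: tasks declare what they depend on, and the scheduler derives a valid execution order. A minimal, framework-free sketch of that idea, with purely illustrative task names:

```python
# Dependency-ordering sketch using only the standard library.
# Each key runs only after everything in its value set has finished,
# the same contract an Airflow DAG or dbt model graph expresses.
from graphlib import TopologicalSorter

pipeline = {
    "extract_orders": set(),
    "extract_users": set(),
    "stg_orders": {"extract_orders"},
    "stg_users": {"extract_users"},
    "fct_revenue": {"stg_orders", "stg_users"},
}

order = list(TopologicalSorter(pipeline).static_order())
# The final model can only run once both staging models are done.
assert order.index("fct_revenue") > order.index("stg_orders")
assert order.index("fct_revenue") > order.index("stg_users")
```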
What You’ll Need
Technical Skills & Experience
- 5+ years of experience in data engineering, platform engineering, or similar hands-on roles.
- Strong programming skills in Python and SQL.
- Hands-on experience with:
- Terraform
- Airflow
- dbt
- Kubernetes
- Cloud platforms (AWS, Google Cloud, or Microsoft Azure)
- CI/CD pipelines (GitHub Actions, GitLab CI, CircleCI, etc.)
- Cloud data warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.)
- Strong understanding of analytical data models and how analytics teams consume data.
- Experience integrating and operating LLM-based pipelines and services (not model training).
Soft Skills & Collaboration
- Strong problem-solving skills and ability to debug complex platform issues.
- Strong preference for declarative development, with the ability to clearly separate what a system should do from how it is implemented.
- Clear communicator who can work effectively with both technical and non-technical stakeholders.
- Pragmatic, ownership-driven mindset with a focus on reliability and simplicity.
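As a small illustration of the declarative preference above: state *what* you want (a filtered, sorted view) rather than *how* to loop for it. The data here is made up.

```python
# Declarative style: the expression reads as the specification itself.
rows = [
    {"name": "b", "active": True},
    {"name": "a", "active": True},
    {"name": "c", "active": False},
]

# "The sorted names of the active rows" -- no explicit loop bookkeeping.
active_sorted = sorted(r["name"] for r in rows if r["active"])
assert active_sorted == ["a", "b"]
```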
Why Join Us?
We welcome people from all backgrounds who seek the opportunity to help build a future where we connect the dots for international property payments. If you have the curiosity, passion, and collaborative spirit, work with us, and let’s move the world of PropTech forward, together.
Redpin, Currencies Direct and TorFX are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, colour, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.
Job Description:
Look for candidates who are preferably immediate joiners and have career stability.
Skills: Data Structures & Algorithms, OOP concepts
Job Brief-
· Understand product requirements and come up with solution approaches
· Build and enhance large scale domain centric applications
· Deploy high quality deliverables into production adhering to the security, compliance and SDLC guidelines
Business analyst, Business analysis, Product owner, Requirement gathering, Agile, FRD, BRD
Qualified Business Analyst with expertise in Data Analytics & AI and Digital Banking. Core domain is Wholesale and Retail Banking.
We primarily look for the following criteria:
- Good requirement gathering / BA skills
- Experience of Agile framework
- Domain – BFS : Wholesale / Retail etc.
- Data Analytics & AI, Digital Banking experience
Preferably IMMEDIATE JOINERS.
C#.NET, ASP.NET, ADO.NET
Role/ Responsibilities
- Design APIs, databases, queues, and monitoring for microservices.
- Write, deploy, and manage microservices.
- Migrate existing components into a distributed microservice architecture.
- Manage infrastructure on AWS or Google Cloud Platform.
- Integrate APIs with third parties.
- Write unit test cases and automation.
- Optimise databases.
- Design highly concurrent backend architecture.
- Handle high-traffic data.
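As an illustrative-only sketch of the concurrent, queue-driven shape behind the responsibilities above (shown in Python for brevity, though this role centres on Spring; in production the queue would be RabbitMQ, Kafka, or SQS rather than in-process):

```python
# Bounded worker pool consuming from a queue, with a lock protecting
# shared state and one shutdown sentinel per worker.
import queue
import threading

tasks: "queue.Queue" = queue.Queue()
results = []
lock = threading.Lock()

def worker() -> None:
    while True:
        item = tasks.get()
        if item is None:          # sentinel: shut this worker down
            tasks.task_done()
            return
        with lock:                # guard shared state under concurrency
            results.append(item * 2)
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for i in range(100):
    tasks.put(i)
for _ in threads:
    tasks.put(None)               # one sentinel per worker
tasks.join()
for t in threads:
    t.join()

assert sorted(results) == [i * 2 for i in range(100)]
```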
Experience required:
- Sound fundamentals in software design.
- Must have worked on distributed and microservice architectures.
- Sound fundamentals of scale, performance, and memory optimisation.
- Sound fundamentals of authentication, authorization, payment processes, and data security.
- Must have experience in Spring / Spring Boot.
- Good to have experience in Kafka / JMS / RabbitMQ / AWS Elastic queue.
- Good to have experience in JUnit / Mockito unit test cases.
- Good to have knowledge of MySQL (or any RDBMS).
* Excellent selling, communication & negotiation skills.
* Time management and organizational skills.
* Ability to create and deliver presentations tailored to audience needs.
* Relationship management skills and openness to feedback.
Understand various raw data input formats, build consumers on Kafka/ksqldb for them and ingest large amounts of raw data into Flink and Spark.
Conduct complex data analysis and report on results.
Build various aggregation streams for data and convert raw data into various logical processing streams.
Build algorithms to integrate multiple sources of data and create a unified data model from all the sources.
Build a unified data model on both SQL and NoSQL databases to act as a data sink.
Communicate the designs effectively with the fullstack engineering team for development.
Explore machine learning models that can be fitted on top of the data pipelines.
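As a framework-free sketch of the aggregation idea described above (shown in Python to stay self-contained; a Flink keyed stream or ksqlDB table would maintain the same keyed fold continuously over unbounded input; the event fields are illustrative):

```python
# Raw events come in as dicts, get keyed, and fold into a running
# aggregate -- the core logic of a keyed aggregation stream.
from collections import defaultdict
from typing import Iterable

def aggregate_by_key(events: Iterable[dict]) -> dict:
    """Fold raw events into per-key running sums."""
    totals: dict = defaultdict(float)
    for event in events:
        totals[event["user_id"]] += event["amount"]
    return dict(totals)

raw_events = [
    {"user_id": "u1", "amount": 10.0},
    {"user_id": "u2", "amount": 5.0},
    {"user_id": "u1", "amount": 2.5},
]
assert aggregate_by_key(raw_events) == {"u1": 12.5, "u2": 5.0}
```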
Mandatory Qualifications & Skills:
Deep knowledge of Scala and Java programming languages is mandatory
Strong background in streaming data frameworks (Apache Flink, Apache Spark) is mandatory
Good understanding of and hands-on skills with streaming messaging platforms such as Kafka
Familiarity with R, C and Python is an asset
Analytical mind and business acumen with strong math skills (e.g. statistics, algebra)
Problem-solving aptitude
Excellent communication and presentation skills
- Expert in Python. Comfortable with Web frameworks, such as Flask or Django
- Familiarity with object-relational mapping libraries and the ability to integrate multiple data sources into one system
- Understanding of the limitations of Python and multi-process architecture, and of the design principles of scalable applications
- Familiarity with data frameworks in Python, such as Pandas
- Good object-oriented design skills and knowledge of design patterns
- Knowledge of key-value stores, caching, search, messaging queues
- Minimum 5 years of experience in the above
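As a small, standard-library illustration of the caching item above (the same read-through idea that external key-value stores like Redis or Memcached provide across processes; the lookup function is hypothetical):

```python
# In-process memoization with functools.lru_cache: repeated lookups of
# the same key are served from cache instead of recomputed.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=128)
def expensive_lookup(key: str) -> str:
    global calls
    calls += 1            # count real (non-cached) computations
    return key.upper()    # stand-in for a slow query or API call

expensive_lookup("user:42")
expensive_lookup("user:42")   # cache hit; no second computation
assert calls == 1
assert expensive_lookup.cache_info().hits == 1
```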










