

We’re building the future of private financial markets
Private markets have traditionally been a space only for the wealthy and well-connected, but we believe in a future where they are more accessible to investors and fundraisers alike. By leveling the playing field, we hope to create a more equitable economy, where inspiring companies are connected to inspired investors, whoever and wherever they are.
Leveraging our trusted brand, global networks and incredible team, we’re building a technology-enabled ecosystem that is as diverse and dynamic as our investor network. As we progress on this ambitious journey, we’re looking for energetic and creative people to support and leave their mark on our platform.
Before Applying
- We have big plans to disrupt the traditional fundraising process for private businesses
- You will work with a diverse team of former investment bankers, strategy consultants and business owners in developing, monitoring, and improving products to facilitate the activity of private investing
- Everything we do is focused on helping build the private capital markets for the next generation of business owners and investors
- We work really hard but play really hard as well
Job purpose
- We are looking for passionate Data Scientists with strong problem-solving skills and prior experience building machine learning models. You should be able to thrive in a fast-paced environment. As a Data Scientist working alongside passionate, data-driven enthusiasts, you will lead the deployment of decision science, advanced analytics, and machine learning and AI capabilities to support various lines of business. You will also help enable a data-driven culture within the organization.
Roles and responsibilities
- Work with other Data Scientists, Data Engineers, Data Analysts, and Software Engineers to build and manage data products
- Work on cross-functional projects, using advanced data modeling and analysis techniques to discover insights that guide strategic decisions and uncover optimization opportunities
- Develop an enterprise data science strategy to achieve scale, synergy, and sustainability in model deployment
- Undertake rigorous analyses of business problems on structured and unstructured data using advanced quantitative techniques
- Apply your expertise in data science, statistical analysis, data mining, and the visualisation of data to derive insights that add value to business decision making (e.g. hypothesis testing, development of MVPs, prototyping; a brief sketch of this kind of work follows this list)
- Manage and optimize processes for data intake, validation, mining, and engineering, as well as modeling, visualization, and communication deliverables
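A minimal sketch of the kind of predictive modeling this role involves, using scikit-learn on a synthetic dataset (the dataset and metric choice here are hypothetical, purely for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for, e.g., an investor-conversion dataset (hypothetical).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate with ROC AUC, a common metric for binary classification.
probs = model.predict_proba(X_test)[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```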
You’ll be a great fit for us if you have
- A Bachelor’s or Master’s degree in Computer Science, Statistics, Mathematics, Economics, or another related field
- At least 3 to 5 years of hands-on experience in a Data Science role, with proficiency in quantitative and statistical analysis, predictive analytics, multivariate testing, and algorithm optimization for machine learning
- Deep expertise in a range of ML concepts, frameworks, and techniques such as logistic regression, clustering, dimensionality reduction, recommendation systems, and neural nets
- A strong understanding of data infrastructure technologies (e.g. Spark, TensorFlow)
- Familiarity with data engineering methodologies, including SQL and ETL, and experience manipulating structured and unstructured data sets using Hadoop, AWS, or other big data platforms
- High proficiency in data visualization and dashboarding tools (e.g. Tableau, Matplotlib, plot.ly)
- A proven track record of delivering bespoke data science solutions in a cross-functional setting
- Experience managing a small team (preferred)
Bonus attributes
- An interest in working with data, including finding and exploring more efficient approaches (e.g. machine learning) to collecting, storing, and analysing it
- Preferably some understanding of financial statement terminology and financial ratios
- Strong problem-solving skills: able to find multiple ways to solve a problem and decide which solution to move forward with
- Ability to work under pressure and tight timelines
- Team-oriented, but highly independent on your own projects
- A high level of organisational skill and the ability to prioritize

Similar jobs

Responsibilities:
• Build a customer-facing solution for the Data Observability product to monitor data pipelines.
• Work on POCs to build new data pipeline monitoring capabilities (a brief sketch follows this list).
• Build next-generation scalable, reliable, flexible, high-performance data pipeline capabilities for ingesting complex datasets from multiple sources.
• Continuously improve the services you own, making them more performant and utilising resources in the most optimised way.
• Collaborate closely with the engineering, data science, and product teams to propose an optimal solution for a given problem statement.
• Work closely with the DevOps team on performance monitoring and MLOps.
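A minimal sketch of the kind of data-observability check such a POC might start from: computing per-column null rates with PySpark and alerting on a threshold (the S3 path and alert threshold are hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-null-check").getOrCreate()

# Hypothetical dataset location; any DataFrame source works the same way.
df = spark.read.parquet("s3://example-bucket/orders/")

total = df.count()
# Per-column null rate, computed in a single pass over the data.
null_rates = df.select([
    (F.sum(F.col(c).isNull().cast("int")) / F.lit(total)).alias(c)
    for c in df.columns
]).first().asDict()

THRESHOLD = 0.05  # illustrative alert threshold
for column, rate in null_rates.items():
    if rate > THRESHOLD:
        print(f"ALERT: column '{column}' has null rate {rate:.2%}")
```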
Required Skills:
• 3+ years of experience with data-related technologies
• Good understanding of distributed computing principles
• Experience with Apache Spark
• Hands-on programming experience with Python
• Knowledge of Hadoop v2, MapReduce, HDFS
• Experience building stream-processing systems using technologies such as Apache Storm, Spark Streaming, or Flink (see the streaming sketch after this list)
• Experience with messaging systems, such as Kafka or RabbitMQ
• Good understanding of Big Data querying tools, such as Hive
• Experience with integration of data from multiple data sources
• Good understanding of SQL queries, joins, stored procedures, relational schemas
• Experience with NoSQL databases, such as HBase, Cassandra/Scylla, MongoDB
• Knowledge of ETL techniques and frameworks
• Experience with performance tuning of Spark jobs
• General understanding of data quality is a plus
• Experience with Databricks, Snowflake, BigQuery, or similar lakehouses would be a big plus
• Some knowledge of DevOps is nice to have
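A minimal sketch of the stream-processing work described above: reading JSON events from Kafka with Spark Structured Streaming and computing windowed counts (the broker address, topic name, and schema are hypothetical; requires the spark-sql-kafka connector package):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Hypothetical event schema for the Kafka topic.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("ts", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "events")
       .load())

# Kafka delivers bytes; decode the value and parse the JSON payload.
parsed = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Count events in 5-minute windows, tolerating 10 minutes of late data.
counts = (parsed
          .withWatermark("ts", "10 minutes")
          .groupBy(window(col("ts"), "5 minutes"))
          .count())

query = counts.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```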
Job Title: Data Engineer (Python, AWS, ETL)
Experience: 6+ years
Location: PAN India (Remote / Work From Home)
Employment Type: Full-time
Preferred domain: Real Estate
Key Responsibilities:
Develop and optimize ETL workflows using Python, Pandas, and PySpark (a brief S3/Athena sketch follows this list).
Design and implement SQL queries for data extraction, transformation, and optimization.
Work with JSON and REST APIs for data integration and automation.
Manage and optimize Amazon S3 storage, including partitioning and lifecycle policies.
Utilize AWS Athena for SQL-based querying, performance tuning, and cost optimization.
Develop and maintain AWS Lambda functions for serverless processing.
Manage databases using Amazon RDS and Amazon DynamoDB, ensuring performance and scalability.
Orchestrate workflows with AWS Step Functions for efficient automation.
Implement Infrastructure as Code (IaC) using AWS CloudFormation for resource provisioning.
Set up AWS Data Pipelines for CI/CD deployment of data workflows.
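A minimal sketch of the S3/Athena workflow described above: writing a partitioned Parquet dataset to S3 with Pandas and querying it via Athena with boto3 (the bucket, database, and table names are hypothetical; assumes pyarrow and s3fs are installed, AWS credentials are configured, and the table is registered in the Glue catalog):

```python
import boto3
import pandas as pd

# Hypothetical names; substitute your own bucket, database, and table.
BUCKET = "example-data-lake"
DATABASE = "analytics"

# Write a Parquet dataset to S3, partitioned by region
# (pandas delegates to pyarrow; s3fs handles the s3:// path).
df = pd.DataFrame({
    "order_id": [1, 2, 3],
    "region": ["us", "eu", "us"],
    "amount": [10.5, 20.0, 7.25],
})
df.to_parquet(f"s3://{BUCKET}/orders/", partition_cols=["region"])

# Query the table with Athena; results land in the output location on S3.
athena = boto3.client("athena")
response = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) AS total FROM orders GROUP BY region",
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": f"s3://{BUCKET}/athena-results/"},
)
print("Query started:", response["QueryExecutionId"])
```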
Required Skills:
Programming & Scripting: Python (ETL, Automation, REST API Integration).
Databases: SQL (Athena / RDS), Query Optimization, Schema Design.
Big Data & Processing: Pandas, PySpark (Data Transformation, Aggregation).
Cloud & Storage: AWS (S3, Athena, RDS, DynamoDB, Step Functions, CloudFormation, Lambda, Data Pipelines).
Good to Have Skills:
Experience with Azure services such as Table Storage, AI Search, Cognitive Services, Functions, Service Bus, and Storage.
Qualifications:
Bachelor’s degree in Data Science, Statistics, Computer Science, or a related field.
6+ years of experience in data engineering, ETL, and cloud-based data processing.

JD :
React.js Developer
Skill Sets: HTML, CSS, JS, TypeScript, Next.js
Location: Bangalore, complete WFO
Address:
Embassy Tech Village Rd, Devarabisanahalli, Bellandur, Bengaluru, Karnataka 560103, India.
Candidate Persona:
Need candidates from a product-based background only, or someone who has worked on product projects.
The resume is extremely important here: if roles and responsibilities, education details, and company names are not described in proper detail, the candidate will not be considered further.
Communication skills: good
B.Tech graduates ONLY
Skills that should be mentioned in the CV: HTML, CSS, JS, TypeScript, data structures and algorithms.
Candidates should have a LinkedIn profile.
Candidates should have LeetCode or HackerRank links (preferred).
Note: only candidates who are ready for all the rounds should apply.


Expertise in developing SAPUI5 applications using ADT (Eclipse)/Web IDE, jQuery, JavaScript, XML, HTML5 & CSS3, and consuming data via NetWeaver Gateway services
• Expertise in developing NetWeaver Gateway services (SEGW)
• Expertise in integrating Gateway OData/JSON services with UI5 apps, system landscape, architecture, SSO, security setup, and Gateway configuration
• Fiori Package Configuration on SAP Gateway & Backend ECC App Configuration
• Customization & Extension of Fiori Apps
• Drive the System Landscape, Architecture, Security, Users Set Up
• Configure Gateway & Fiori Infrastructure (with support from Basis Team)
• Fiori App Configuration (SAP Gateway & S/4HANA System)
• Work with Functional and Technical Resources in Setting up the S/4HANA Backend
• Set User Roles, Profiles, Apply Notes & Fixes in ABAP Systems
• Roll out Apps to Business Users for Testing
• Expertise in coding for consuming data in UI5 apps through NetWeaver Gateway OData/JSON services
• Expertise in extending OData services/OData models
• Expertise in customizing/Extending the standard Fiori Apps
• Expertise in creating HANA schemas, packages, attribute views, analytical views, and calculation views with HANA Live; attribute mapping; creating calculated measures, calculated attributes, input parameters, variables, restricted measures, constant columns, and currency/unit conversions; writing stored procedures; and maintaining analytic privileges. Involved in data provisioning using SLT and BODS (Data Services) metadata import, and in export/import and validation of models.
• Expertise in using AMDP, CDS views, ADBC, External views
Roles and Responsibilities
Design and implement email marketing campaigns for lead generation and branding.
Proofread emails for clarity, grammar, and spelling.
Identify target audience and grow our email list.
Ensure mobile-friendly email templates.
Analyze campaign performance and suggest improvements.
Report on sales revenue generated from email marketing efforts.
Conceptualize marketing campaigns that speak directly to the pain points of existing and prospective clients.
Collaborate with graphic designers to improve the appearance and layout of outputs.
Maintain a database of customers who have opted in to receive our correspondence.
Should be familiar with email marketing tools and email marketing automation.

Should be open to embracing new technologies and keeping up with emerging tech.
Strong troubleshooting and problem-solving skills.
Willing to be part of a high-performance team and build mature products.
Should be able to take ownership and work under minimal supervision.
Strong Linux system administration background (minimum 2 years' experience), responsible for handling/defining the organization's (hybrid) infrastructure.
Working knowledge of MySQL databases, Nginx, and HAProxy load balancer.
Experience with CI/CD pipelines, configuration management (Ansible/SaltStack), and cloud technologies (AWS/Azure/GCP).
Hands-on experience with GitHub, Jenkins, Prometheus, Grafana, Nagios, and open-source tools.
Strong shell & Python scripting would be a plus (a brief monitoring sketch follows this list).
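As a small illustration of the Python-plus-Prometheus combination mentioned above, a sketch of a probe script that checks service endpoints and exposes gauges for Prometheus to scrape (the endpoints, port, and metric names are hypothetical; uses the requests and prometheus_client libraries):

```python
import time

import requests
from prometheus_client import Gauge, start_http_server

# Hypothetical endpoints; in practice these would come from configuration.
TARGETS = {
    "nginx": "http://localhost/health",
    "haproxy": "http://localhost:8404/stats",
}

UP = Gauge("probe_up", "1 if the HTTP probe succeeded, else 0", ["service"])
LATENCY = Gauge("probe_latency_seconds", "Probe round-trip time", ["service"])

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        for name, url in TARGETS.items():
            start = time.monotonic()
            try:
                ok = requests.get(url, timeout=2).ok
                UP.labels(service=name).set(1 if ok else 0)
            except requests.RequestException:
                UP.labels(service=name).set(0)
            LATENCY.labels(service=name).set(time.monotonic() - start)
        time.sleep(15)  # probe interval
```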
Role: RPA Analyst
Company: KOCH (https://www.kochind.com)
Type: Permanent (Direct payroll)
Edu: Any Full time Graduates
Exp: 6+ years
Job Location: Kundalahalli, near Brookefield Hospital, Bangalore 560037
- 5+ years of IT experience
- 3+ years of development experience with Automation Anywhere
- Knowledge of UiPath
- Minimum of 3 years' hands-on experience working with Automation Anywhere 10.7 or higher
- Should have knowledge of 11.x
- Proficient in automating complex business requirements and proactively proposing alternative solutions
- Design and development of bots/MetaBots to enhance functionality
- Must have worked on automating processes that involve ERP systems and web systems
- Proficient in understanding requirements and proposing the relevant RPA tool (Automation Anywhere or UiPath)
- Should be open to learning new technologies and sharing the workload of team members
- Bachelor's/Master's degree in Computer Science/Information Technology or a related field
- Strong verbal and written communication
- Strong analytical and problem-solving skills
- Relationship management

About the Role
Dremio’s user experience is one of its key differentiators and makes all your data easily accessible and shareable by your data consumers. UI Engineers at Dremio are responsible for the development of the user interface and user experience on Dremio’s Data Lake Engine.
Responsibilities and ownership
- Own the full cycle of development of our modern single-page web application, from inception and design through development, testing, and production.
- Care deeply about modular design patterns and frameworks, delivering an architecture rooted in simplicity that is easy to iterate on and constantly evolve.
- Be passionate about the ease of use, experience, and quality of the product.
Requirements
- 5+ years of experience working with JavaScript frameworks such as React, Angular.js, Angular, or Vue.js.
- A minimum of 2 years of experience with React is highly preferred, ideally with React in use at your current job.
- Strong coding experience in JavaScript (or TypeScript), HTML, and CSS.
- Passion for UI development and UX design.
- Proven success in delivering high-quality front-end applications.
- Fluency in SQL and databases (relational or non-relational).
- B.S. or M.S. in Computer Science or a relevant technical field, or equivalent professional experience.

