UnFound is an intelligent news aggregator built to give users multiple perspectives on news stories, along with contextual information. UnFound does almost everything via AI algorithms, with minimal reliance on human editors.
UnFound was born to fight the misinformation epidemic. The one true source, the alpha & the omega of all your news needs.
But we're not journalists; we're Product Managers, Engineers, Machine Learning practitioners, & Media/News enthusiasts. We aggregate information, add AI, & boom. We want to empower users to steer through the bias in the news and consume it more critically, more rationally. We're here to invite you to join us.
- Sound knowledge of MongoDB as a primary skill
- Hands-on experience with MySQL as a secondary skill
- Experience with replication, sharding, and scaling
- Design, install, and maintain highly available systems (including monitoring, security, backup, and performance tuning)
- Implement secure database and server installations (privileged access methodology / role-based access)
- Help the application team with query writing, performance tuning, and other day-to-day issues
- Deploy automation techniques for day-to-day operations
- Good analytical and problem-solving skills
- Willingness to work flexible hours as needed
- Scripting experience a plus
- Ability to work independently and as a member of a team
- Good verbal and written communication skills
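The sharding item above hinges on routing documents by a shard key. Here is a minimal pure-Python sketch of the hashed-sharding idea; the function name, key format, and shard count are illustrative, not MongoDB API calls:

```python
import hashlib

def shard_for(key: str, num_shards: int = 3) -> int:
    """Route a document to a shard by hashing its shard key,
    analogous to a hashed sharding strategy."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Documents with the same shard key always land on the same shard.
docs = ["user:1001", "user:1002", "user:1003"]
placement = {d: shard_for(d) for d in docs}
```

In a real deployment the router (e.g., mongos) performs this mapping; the sketch only shows why a well-chosen shard key spreads load deterministically.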
· 3+ years of relevant technical experience in a data analyst role
· Intermediate-to-expert SQL skills and basic statistics
· Experience with Advanced SQL
· Python programming an added advantage
· Strong problem-solving and structuring skills
· Automation in connecting various data sources and representing the data through dashboards
· Excellent with numbers; able to communicate data points through various reports/templates
· Ability to communicate effectively both within and outside the Data Analytics team
· Proactively take up work responsibilities and handle ad-hoc requests as needed
· Ability and desire to take ownership of and initiative for analysis, from requirements clarification to deliverable
· Strong technical communication skills, both written and verbal
· Ability to understand and articulate the "big picture" and simplify complex ideas
· Ability to identify and learn applicable new techniques independently as needed
· Experience with various databases (relational and non-relational) and ETL processes
· Experience handling large volumes of data while adhering to optimization and performance standards
· Ability to analyze data and provide relationship views of it from different angles
· Excellent communication skills (written and oral)
· Knowledge of Data Science an added advantage
MySQL, Advanced Excel, Tableau, reporting and dashboards, MS Office, VBA, analytical skills
· Strong understanding of relational databases such as MySQL
· Prior experience working remotely full-time
· Prior experience working with Advanced SQL
· Experience with one or more BI tools, such as Superset, Tableau, etc.
· High level of logical and mathematical ability in problem solving
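"Advanced SQL" in requirements like these typically means window functions and analytic queries. A small sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration, and it assumes a SQLite build with window-function support (3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, month TEXT, revenue INTEGER);
INSERT INTO sales VALUES
  ('North', '2023-01', 100), ('North', '2023-02', 150),
  ('South', '2023-01', 80),  ('South', '2023-02', 120);
""")
# Window function: running total of revenue per region.
rows = conn.execute("""
SELECT region, month, revenue,
       SUM(revenue) OVER (PARTITION BY region ORDER BY month) AS running_total
FROM sales
ORDER BY region, month
""").fetchall()
```

The `PARTITION BY` clause restarts the running total for each region, which is the kind of analytic query a plain `GROUP BY` cannot express.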
We are looking for a Quantitative Developer who is passionate about financial markets and wants to join a scale-up with an excellent track record and growth potential in an innovative and fast-growing industry.
As a Quantitative Developer, you will work on the infrastructure of our platform as part of a very ambitious team.
At QCAlpha you have the freedom to choose the path that leads to the solution, and you get a lot of responsibility.
• Design, develop, test, and deploy elegant software solutions for automated trading systems
• Build high-performance, bullet-proof components for both live trading and simulation
• Develop technology infrastructure systems, including connectivity, maintenance, and internal automation processes
• Achieve trading system robustness through automated reconciliation and system-wide alerts
• Bachelor's degree or higher in computer science or another quantitative discipline
• Strong fundamental knowledge of object-oriented programming, algorithms, data structures, and design patterns
• Familiarity with the following technology stack: Linux shell, Python and its ecosystem (NumPy, Pandas), SQL, Redis, Docker, or similar systems
• Experience with Python frameworks such as Django or Flask
• Solid understanding of Git and CI/CD
• Excellent design, debugging, and problem-solving skills
• Proven versatility and ability to pick up new technologies and learn systems quickly
• Trading execution development and support experience is a plus
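The "automated reconciliation" responsibility above can be illustrated with a tiny position-reconciliation sketch in Python; the fill data and tolerance are invented for the example:

```python
def reconcile(internal: dict, broker: dict, tol: float = 1e-9) -> list:
    """Compare internally recorded positions against broker-reported
    positions and return a list of (symbol, internal_qty, broker_qty) breaks."""
    breaks = []
    for symbol in sorted(set(internal) | set(broker)):
        ours = internal.get(symbol, 0.0)
        theirs = broker.get(symbol, 0.0)
        if abs(ours - theirs) > tol:
            breaks.append((symbol, ours, theirs))
    return breaks

internal_fills = {"AAPL": 100.0, "MSFT": -50.0}
broker_fills = {"AAPL": 100.0, "MSFT": -40.0, "GOOG": 10.0}
breaks = reconcile(internal_fills, broker_fills)
```

A production version would run this on a schedule and raise the system-wide alerts the posting mentions whenever `breaks` is non-empty.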
• Project Planning and Management
o Take end-to-end ownership of multiple projects / project tracks
o Create and maintain project plans and other related documentation for project objectives, scope, schedule, and delivery milestones
o Lead and participate across all phases of software engineering, from requirements gathering to go-live
o Lead internal team meetings on solution architecture, effort estimation, manpower planning, and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by developing effective mitigation plans
• Team Management
o Act as the Scrum Master
o Conduct Scrum ceremonies such as Sprint Planning, Daily Standup, and Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and Scrum principles
o Hold the team accountable for their tasks and help them achieve those tasks
o Identify requirements and come up with a skill-development plan for all team members
o Be the single point of contact for the client for day-to-day communication
o Periodically communicate project status to all stakeholders (internal/external)
• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team
• Minimum 8 years of experience (hands-on as well as leadership) in software / data engineering across multiple job functions such as Business Analysis, Development, Solutioning, QA, and DevOps
• Hands-on as well as leadership experience in Big Data Engineering projects
• Experience developing or managing cloud solutions using Azure or another cloud provider
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL DBs, SQL, Data Warehousing, and ETL/ELT
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems-level critical thinking skills
• Strong collaboration and influencing skills
Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL Pool, Databricks, Power BI, Machine Learning, and Cloud Infrastructure
• Background in BFSI with a focus on core banking
• Willingness to travel
• Customer office (Mumbai) / remote work
• UG: B.Tech (Computers) / B.E. (Computers) / BCA / B.Sc. Computer Science
- Develop process workflows for data preparation, modeling, and mining
- Manage configurations to build reliable datasets for analysis
- Troubleshoot services, system bottlenecks, and application integration
- Design, integrate, and document technical components and dependencies of the big data platform
- Ensure best practices that can be adopted in the Big Data stack and shared across teams
- Design and develop data pipelines on AWS Cloud
- Data pipeline development using PySpark, AWS, and Python
- Develop PySpark streaming applications
- Hands-on experience in Spark, Python, and Cloud
- Highly analytical and data-oriented
- Good to have: Databricks
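The extract/filter/aggregate shape of such a pipeline, which a PySpark job would express with DataFrame operations, can be sketched in plain Python so it runs without a Spark cluster; the field names and sample rows are invented:

```python
def extract(raw_lines):
    """Parse raw CSV-like event lines into records (Extract)."""
    for line in raw_lines:
        user, event, amount = line.split(",")
        yield {"user": user, "event": event, "amount": float(amount)}

def transform(records):
    """Keep only purchase events (Transform / filter stage)."""
    return (r for r in records if r["event"] == "purchase")

def load(records):
    """Aggregate revenue per user (Load / aggregate stage)."""
    totals = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

raw = ["u1,purchase,9.99", "u1,view,0", "u2,purchase,4.50"]
result = load(transform(extract(raw)))
```

In PySpark the same logic would be a `filter` followed by a `groupBy().sum()` on a DataFrame; the generator-based staging mirrors Spark's lazy evaluation in miniature.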
Software Developer, Data Engineering team
Location: Pune (initially 100% remote for the coming year due to COVID-19)
We are looking for an exceptional Software Developer for our Data Engineering India team who can contribute to building a world-class big data engineering stack that fuels our Analytics and Machine Learning products. This person will contribute to the architecture, operation, and enhancement of:
- Our petabyte-scale data platform, with a key focus on finding solutions that can support the Analytics and Machine Learning product roadmap. Every day, terabytes of ingested data need to be processed and made available for querying and insight extraction for various use cases.
- Our bespoke Machine Learning pipelines. This will also provide opportunities to contribute to the prototyping, building, and deployment of Machine Learning models.
About the Organisation:
- It provides a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology and there is never a dull moment at work.
- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, the United States, Germany, the United Kingdom, and India.
- You will gain work experience in a global environment. We speak over 20 different languages, come from more than 16 different nationalities, and over 42% of our staff are multilingual.
- At least 4 years' experience
- Deep technical understanding of Java or Golang
- Production experience with Python is a big plus and an extremely valuable supporting skill
- Exposure to modern Big Data tech: Cassandra/Scylla, Kafka, Ceph, the Hadoop stack, Spark, Flume, Hive, Druid, etc., while at the same time understanding that certain problems may require completely novel solutions
- Exposure to one or more modern ML tech stacks (Spark MLlib, TensorFlow, Keras, GCP ML Stack, AWS SageMaker) is a plus
- Experience working in an Agile/Lean model
- Experience supporting and troubleshooting large systems
- Exposure to configuration management tools such as Ansible or Salt
- Exposure to IaaS platforms such as AWS, GCP, Azure, etc.
- Good to have: experience working with large-scale data
- Good to have: experience architecting, developing, and operating data warehouses, big data analytics platforms, and high-velocity data pipelines
**** Not looking for a Big Data Developer / Hadoop Developer
- Key responsibility is to design, develop, and maintain efficient data models for the organization, ensuring optimal query performance at the consumption layer.
- Develop, deploy, and maintain a repository of UDXs written in Java / Python.
- Develop optimal data model designs, analyzing complex distributed data deployments and making recommendations to optimize performance based on data consumption patterns, performance expectations, the queries executed on the tables/databases, etc.
- Perform periodic database health checks and maintenance.
- Design collections in a NoSQL database for efficient performance.
- Document and maintain a data dictionary from various sources to enable data governance.
- Coordinate with business teams, IT, and other stakeholders to provide best-in-class data pipeline solutions, exposing data via APIs, loading into downstream systems, NoSQL databases, etc.
- Implement data governance processes and ensure data security.
- Extensive working experience in designing and implementing data models in OLAP Data Warehousing solutions (Redshift, Synapse, Snowflake, Teradata, Vertica, etc.).
- Programming experience using Python / Java.
- Working knowledge of developing and deploying user-defined functions (UDXs) using Java / Python.
- Strong understanding of and extensive working experience with OLAP Data Warehousing (Redshift, Synapse, Snowflake, Teradata, Vertica, etc.) architecture and cloud-native Data Lake (S3, ADLS, BigQuery, etc.) architecture.
- Strong knowledge of design, development, and performance tuning of 3NF/Flat/Hybrid data models.
- Extensive technical experience in SQL, including code optimization techniques.
- Strong knowledge of database performance tuning and troubleshooting.
- Knowledge of collection design in any NoSQL DB (DynamoDB, MongoDB, CosmosDB, etc.), along with implementation of best practices.
- Ability to understand business functionality, processes, and flows.
- Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently.
- Any OLAP DWH DBA and user-management experience will be an added advantage.
- Knowledge of financial-industry-specific data models such as FSLDM, the IBM Financial Data Model, etc. will be an added advantage.
- Experience with Snowflake will be an added advantage.
- Working experience in BFSI/NBFC and understanding of loan/mortgage data will be an added advantage.
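The UDX requirement usually comes down to registering a plain scalar function with the warehouse (for example, a Python UDF in Redshift or Vertica). A minimal sketch of such a function body, with an invented masking rule of the kind a data-governance policy might require for PII:

```python
def mask_email(email):
    """Scalar UDF body: mask the local part of an e-mail address,
    keeping only the first character and the domain."""
    if email is None or "@" not in email:
        return None
    local, domain = email.split("@", 1)
    return local[0] + "***@" + domain
```

The warehouse applies such a function row-by-row once it is registered (e.g., via `CREATE FUNCTION ... LANGUAGE plpythonu` in Redshift), so the same logic can enforce masking at the consumption layer without copying data.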
- Data Governance & Quality Assurance
- Modern OLAP Database Architecture & Design
- Data structures, algorithms & data modeling techniques
- NoSQL database architecture
- Data Security
Location - Remote till COVID (Hyderabad StackNexus office post-COVID)
Experience - 5-7 years
Skills required - Hands-on experience in Azure data modelling, Python, SQL, and Azure Databricks
Notice period - Immediate to 15 days
Company Profile and Job Description
AthenasOwl (AO) is our "AI for Media" solution that helps content creators and broadcasters create and curate smarter content. We launched the product in 2017 as an AI-powered suite for the media and entertainment industry. Clients use AthenasOwl's context-adapted technology for redesigning content, making better targeting decisions, automating hours of post-production work, and monetizing massive content libraries.
For more details visit: www.athenasowl.tv
Senior Machine Learning Engineer
4-6 years of experience
Mumbai (Malad W)
- Develop cutting-edge machine learning solutions at scale to solve computer vision problems in the domains of media, entertainment, and sports
- Collaborate with media houses and broadcasters across the globe to solve niche problems in the fields of post-production, archiving, and viewership
- Manage a team of highly motivated engineers to deliver high-impact solutions quickly and at scale
The ideal candidate should have:
- Strong programming skills in one or more programming languages such as Python and C/C++
- Sound fundamentals in data structures, algorithms, and object-oriented programming
- Hands-on experience with a popular deep learning framework such as TensorFlow, PyTorch, etc.
- Experience implementing deep learning solutions (Computer Vision, NLP, etc.)
- Ability to quickly learn and communicate the latest findings in AI research
- Creative thinking for leveraging machine learning to build end-to-end intelligent software systems
- A pleasantly forceful personality and charismatic communication style
- Someone who will raise the average effectiveness of the team and has demonstrated exceptional abilities in some area of their life. In short, we are looking for a "Difference Maker"