We are looking for a Quant Analyst Intern for our Bangalore office. This role is responsible for the aspects of our trading infrastructure that support our trading operations and feed our research.
You will collaborate with experienced colleagues to build new technology. Your work will integrate the best new network and compute technologies with our internally developed hardware and software to operate our industry-leading trading system architecture.
A Quant Analyst Intern might work on projects in areas such as market access, monitoring and compliance, algo/strategy evaluation and management, performance tuning, and trading automation. You will work on a tight-knit, highly productive team that churns through large amounts of code daily. You will be charged with designing new systems that impact the whole company and debugging mission-critical software in high-pressure, fast-paced environments. In this role, you will be expected to build deep knowledge of Python, SQL, Excel, Power BI, machine learning (deep learning in particular), quantitative finance, and algorithm design.
Responsibilities:
• Perform quantitative analysis on large datasets to identify trends, patterns, and insights.
• Utilize statistical techniques and modeling to extract meaningful information from data
• Assist in the development and validation of quantitative models for various purposes such as risk assessment, pricing, and forecasting
• Collaborate with senior analysts and researchers to refine and improve existing models
• Prepare reports and presentations summarizing analysis findings and insights
• Conduct testing and validation of models to ensure accuracy, reliability, and compliance with regulatory standards
• Excellent problem-solving skills with the ability to approach complex problems in a logical and systematic manner
• Ability to effectively communicate technical concepts to both technical and non-technical audiences
• Communicate results to team members and stakeholders in a clear and concise manner
• Strong attention to detail and accuracy in analysis and documentation
• Ability to identify and correct errors in data and analysis results
• Ability to work effectively in a team environment, collaborating with colleagues from diverse backgrounds and disciplines
Requirements:
• 0–6 months of experience working with Python in a professional production environment
• Knowledge of analytics tools: SQL, Excel, Power BI
• In-depth knowledge of Python libraries: Pandas, NumPy, Plotly, etc.
• Familiarity with IDEs such as Jupyter Notebook and VS Code
• Highly collaborative in nature, with excellent written and verbal communication skills
• Familiarity with AWS, Linux, and DevOps is strongly preferred, but not required
• Familiarity with the trading domain and finance is strongly preferred, but not required
• Proven ability to communicate with less technical users to understand requirements and design and implement appropriate solutions
• Bachelor’s degree in Computer Science, Data Science, Math or equivalent
• High level of ownership and accountability, reliability, and strong follow-through
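As a rough illustration of the kind of day-to-day quantitative analysis this role describes, a simple return-and-volatility calculation can be sketched with the standard library alone; the price series below is purely illustrative:

```python
# A minimal sketch of exploratory quantitative analysis.
# The price series is made up for illustration.
from statistics import mean, stdev

prices = [100.0, 101.5, 100.8, 102.2, 103.0, 102.5]

# Simple daily returns: r_t = p_t / p_{t-1} - 1
returns = [p1 / p0 - 1 for p0, p1 in zip(prices, prices[1:])]

avg_return = mean(returns)
volatility = stdev(returns)

print(f"mean daily return: {avg_return:.4%}")
print(f"daily volatility:  {volatility:.4%}")
```

In practice the same computation would typically be done over a Pandas DataFrame loaded from SQL, but the underlying arithmetic is the same.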
About AlphaNumeriq.ai
AlphaNumeriq.ai is a diversified trading firm with experience bringing sophisticated technology and exceptional people together to operate in markets around the world. We value autonomy and the ability to quickly pivot to capture opportunities, so we operate using our own capital and trading at our own risk.
Headquartered in UAE with offices throughout the APAC region, we trade a variety of asset classes including Equities, FX, Commodities and Energy across all major global markets. We have also leveraged our expertise and technology to expand into crypto assets.
We operate with respect, curiosity, and open minds. The people who thrive here share our belief that it's not just what we do that matters; it's how we do it. AlphaNumeriq.ai is a place of high expectations, integrity, innovation, and a willingness to challenge consensus.
We are a Quantitative Algorithmic trading firm that deals in US and Indian equities, Derivatives, Currencies, Metals, Energies and Commodities markets.
Similar jobs
● Experience with various stream-processing and batch-processing tools (Kafka, Spark, etc.), and programming with Python.
● Experience with relational and non-relational databases.
● Fairly good understanding of AWS (or any equivalent).
Key Responsibilities
● Design new systems and redesign existing systems to work at scale.
● Care about things like fault tolerance, durability, backups and recovery, performance, maintainability, code simplicity, etc.
● Lead a team of software engineers and help create an environment of ownership and learning.
● Introduce best practices of software development and ensure their adoption across the team.
● Help set and maintain coding standards for the team.
Job Description
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lakes
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices
- Scripting languages: Python & PySpark
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Ingest data from different sources that expose data using different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies
- Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it using the language supported by the base data platform
- Develop automated data-quality checks to make sure the right data enters the platform, and verify the results of calculations
- Develop infrastructure to collect, transform, combine, and publish/distribute customer data
- Define process-improvement opportunities to optimize data collection, insights, and displays
- Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
- Identify and interpret trends and patterns in complex data sets
- Construct a framework utilizing data-visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
- Participate in regular Scrum ceremonies with the agile teams
- Develop queries, write reports, and present findings proficiently
- Mentor junior members and bring in industry best practices
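The "automated data-quality checks" responsibility above can be sketched with nothing more than the standard library; the field names and validation rules here are hypothetical, chosen only to show the gate-before-ingest pattern:

```python
# A sketch of a data-quality gate for file-based ingestion.
# Field names and rules are hypothetical.
import csv
import io

REQUIRED_FIELDS = ["account_id", "amount", "currency"]

def validate_rows(reader):
    """Yield (row, errors) pairs; an empty error list means the row passes."""
    for row in reader:
        errors = []
        for field in REQUIRED_FIELDS:
            if not row.get(field):
                errors.append(f"missing {field}")
        try:
            float(row.get("amount", ""))
        except ValueError:
            errors.append("amount is not numeric")
        yield row, errors

# Inline sample standing in for an S3 / flat-file source
raw = "account_id,amount,currency\nA1,10.50,USD\nA2,,USD\nA3,abc,EUR\n"
good, bad = [], []
for row, errors in validate_rows(csv.DictReader(io.StringIO(raw))):
    (good if not errors else bad).append((row, errors))

print(f"{len(good)} rows passed, {len(bad)} rejected")
```

In a Glue/PySpark job the same rules would usually be expressed as DataFrame filters, with rejected rows routed to a quarantine location rather than dropped.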
QUALIFICATIONS
- 5–7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science, or a related discipline
- Advanced knowledge of one of the following languages: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie/Airflow, Amazon Web Services (AWS), Docker/Kubernetes, Snowflake
- Proficient with:
  - Data mining/programming tools (e.g. SAS, SQL, R, Python)
  - Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
  - Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines
- Good written and oral communication skills and the ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies, and techniques
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streams / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Senior Data Scientist
Your goal: To improve the education process and improve the student experience through data.
The organization: Data Science for Learning Services Data Science and Machine Learning are core to Chegg. As a Student Hub, we want to ensure that students discover the full breadth of learning solutions we have to offer to get full value on their learning time with us. To create the most relevant and engaging interactions, we are solving a multitude of machine learning problems so that we can better model student behavior, link various types of content, optimize workflows, and provide a personalized experience.
The Role: Senior Data Scientist
As a Senior Data Scientist, you will focus on conducting research and development in NLP and ML. You will be responsible for writing production-quality code for data product solutions at Chegg. You will lead in identification and implementation of key projects to process data and knowledge discovery.
Responsibilities:
• Translate product requirements into AIML/NLP solutions
• Be able to think out of the box and be able to design novel solutions for the problem at hand
• Write production-quality code
• Be able to design data and annotation collection strategies
• Identify key evaluation metrics and release requirements for data products
• Integrate new data and design workflows
• Innovate, share, and educate team members and community
Requirements:
• Working experience in machine learning, NLP, recommendation systems, experimentation, or related fields, with a specialization in NLP
• Working experience with large language models that cater to multiple tasks such as text generation, Q&A, summarization, and translation is highly preferred
• Knowledge of MLOps and deployment pipelines is a must
• Expertise in supervised, unsupervised, and reinforcement ML algorithms
• Strong programming skills in Python
• Top data-wrangling skills using SQL or NoSQL queries
• Experience using containers to deploy real-time prediction services
• Passion for using technology to help students
• Excellent communication skills
• Good team player and a self-starter
• Outstanding analytical and problem-solving skills
• Experience working with ML pipeline products such as AWS Sagemaker, Google ML, or Databricks a plus.
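As a small illustration of the "key evaluation metrics" bullet above, precision and recall for a binary classifier can be computed directly; the labels below are made up for demonstration:

```python
# Minimal sketch of computing evaluation metrics for a binary classifier.
# Ground-truth and predicted labels are fabricated examples.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0

print(f"precision={precision:.2f} recall={recall:.2f}")
```

For real data products these numbers would come from `scikit-learn` or an ML-pipeline service, but the release-requirement idea is the same: agree on the metric and its threshold before shipping.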
Why do we exist?
Students are working harder than ever before to stabilize their future. Our recent research study called State of the Student shows that nearly 3 out of 4 students are working to support themselves through college and 1 in 3 students feel pressure to spend more than they can afford. We founded our business on providing affordable textbook rental options to address these issues. Since then, we’ve expanded our offerings to supplement many facets of higher educational learning through Chegg Study, Chegg Math, Chegg Writing, Chegg Internships, Thinkful Online Learning, and more, to support students beyond their college experience. These offerings lower financial concerns for students by modernizing their learning experience. We exist so students everywhere have a smarter, faster, more affordable way to student.
Video Shorts
Life at Chegg: https://jobs.chegg.com/Video-Shorts-Chegg-Services
Certified Great Place to Work!: http://reviews.greatplacetowork.com/chegg
Chegg India: http://www.cheggindia.com/
Chegg Israel: http://insider.geektime.co.il/organizations/chegg
Thinkful (a Chegg Online Learning Service): https://www.thinkful.com/about/#careers
Chegg out our culture and benefits!
http://www.chegg.com/jobs/benefits
https://www.youtube.com/watch?v=YYHnkwiD7Oo
Chegg is an equal-opportunity employer
Expertise in handling large amounts of data through Python or PySpark
Conduct data assessment, perform data-quality checks, and transform data using SQL and ETL tools
Experience deploying ETL/data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued
Comfort with data modelling principles (e.g. database structure, entity relationships, UIDs, etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
Track record of strong problem-solving, requirement gathering, and leading by example
Ability to thrive in a flexible and collaborative environment
Track record of completing projects successfully on time, within budget, and as per scope
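The SQL-based assessment and transformation step described above can be sketched with an in-memory SQLite database from the standard library; the table and column names are hypothetical:

```python
# Sketch of a data-quality check plus SQL transform, using in-memory SQLite.
# Table, columns, and rows are hypothetical illustration data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 120.0, "paid"), (2, None, "paid"), (3, 45.5, "cancelled")],
)

# Assessment: count rows with a missing amount
(null_count,) = conn.execute(
    "SELECT COUNT(*) FROM raw_orders WHERE amount IS NULL"
).fetchone()

# Transform: keep only complete, paid orders
rows = conn.execute(
    "SELECT id, amount FROM raw_orders "
    "WHERE amount IS NOT NULL AND status = 'paid'"
).fetchall()

print(f"{null_count} rows with missing amount; {len(rows)} clean rows kept")
```

The same pattern (profile first, then filter/transform) carries over directly to Spark SQL or a cloud warehouse; only the connection and dialect change.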
- We are looking for a Data Engineer to build the next-generation mobile applications for our world-class fintech product.
- The candidate will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams.
- The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up.
- Looking for a person with a strong ability to analyse and provide valuable insights to the product and business team to solve daily business problems.
- You should be able to work in a high-volume environment, have outstanding planning and organisational skills.
Qualifications for Data Engineer
- Working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Experience building and optimising ‘big data’ data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Looking for a candidate with 2-3 years of experience in a Data Engineer role, who is a CS graduate or has an equivalent experience.
What we're looking for?
- Experience with big data tools: Hadoop, Spark, Kafka and other alternate tools.
- Experience with relational SQL and NoSQL databases, including MySQL/Postgres and MongoDB.
- Experience with data pipeline and workflow management tools: Luigi, Airflow.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with stream-processing systems: Storm, Spark-Streaming.
- Experience with object-oriented/functional scripting languages: Python, Java, Scala.
We are looking for a savvy Data Engineer to join our growing team of analytics experts.
The hire will be responsible for:
- Expanding and optimizing our data and data pipeline architecture
- Optimizing data flow and collection for cross-functional teams.
- Will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.
- Must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.
- Experience with Azure : ADLS, Databricks, Stream Analytics, SQL DW, COSMOS DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with object-oriented/functional scripting languages: Python, SQL, Scala, Spark-SQL, etc.
Nice to have experience with :
- Big data tools: Hadoop, Spark and Kafka
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow
- Stream-processing systems: Storm
Database: SQL DB
Programming languages: PL/SQL, Spark SQL
Looking for candidates with Data Warehousing experience, strong domain knowledge & experience working as a Technical lead.
The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.
The programmer should be proficient in Python and able to work fully independently. They should also be able to work with databases, with a strong ability to fetch data from various sources, organise it, and identify useful information through efficient code.
Familiarity with Python