Advanced degree in computer science, math, statistics, or a related discipline (master's degree required)
Extensive data modeling and data architecture skills
Programming experience in Python and R
Background in machine learning frameworks such as TensorFlow or Keras
Knowledge of Hadoop or other distributed computing systems
Experience working in an Agile environment
Advanced math skills (important):
- Linear algebra
- Discrete math
- Differential equations (ODEs and numerical)
- Theory of statistics 1
- Numerical analysis 1 (numerical linear algebra) and 2 (quadrature)
- Abstract algebra
- Number theory
- Real analysis
- Complex analysis
- Intermediate analysis (point-set topology)
Strong written and verbal communication skills
Hands-on experience with NLP and NLG
Experience with advanced statistical techniques and concepts (GLM/regression, random forests, boosting, trees, text mining) and their practical application
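As a refresher on the GLM/regression techniques listed above, here is a minimal sketch of logistic regression (a GLM with a logit link) fit by stochastic gradient descent. The dataset and hyperparameters are invented purely for illustration:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=200):
    """Fit a one-feature logistic regression (a GLM with logit link)
    by stochastic gradient descent. xs: floats, ys: 0/1 labels."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            w -= lr * (p - y) * x                      # gradient step for weight
            b -= lr * (p - y)                          # gradient step for bias
    return w, b

def predict(w, b, x):
    """Classify as 1 when the predicted probability reaches 0.5."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Toy linearly separable data: small x -> class 0, large x -> class 1
xs = [0.1, 0.4, 0.6, 0.9, 1.2, 1.5]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
```

In practice one would reach for statsmodels or scikit-learn; the point here is only the underlying GLM mechanics.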
About The other Fruit
We believe that privacy and free choice are not mutually exclusive; they reinforce one another. We aim to empower the individual, permitting a trusted and honest exchange of control. Doing the right thing for our team, our clients, our partners and our communities is also the best thing for our business.
Building in-house as well as fully bespoke B2B, B2C and DFA S/P/BaaS solutions, we are lucky enough to partner with some of the leading firms and groups from around the world. TOF® is headquartered in the UK, with branch locations in Singapore, Hong Kong, Thailand and Pune.
● Research and develop advanced statistical and machine learning models for
analysis of large-scale, high-dimensional data.
● Dig deeper into data, understand characteristics of data, evaluate alternative
models and validate hypotheses through theoretical and empirical approaches.
● Productize proven or working models into production quality code.
● Collaborate with product management, marketing and engineering teams in
Business Units to elicit & understand their requirements & challenges and
develop potential solutions
● Stay current with latest research and technology ideas; share knowledge by
clearly articulating results and ideas to key decision makers.
● File patents for innovative solutions that add to the company's IP portfolio
Requirements
● 4 to 6 years of strong experience in data mining, machine learning and
statistical analysis.
● BS/MS/PhD in Computer Science, Statistics, Applied Math, or related areas
from premier institutes (only IIT / IISc / BITS / top NIT or top US university
graduates should apply)
● Experience in productizing models to code in a fast-paced start-up
environment.
● Expertise in Python programming language and fluency in analytical tools
such as Matlab, R, Weka etc.
● Strong intuition for data and keen aptitude for large-scale data analysis
● Strong communication and collaboration skills.
Role: Head of Analytics
Location: Bangalore (Full time)
ABOUT QRATA:
Qrata matches top talent with global career opportunities from the world’s leading digital companies, including some of the world’s fastest growing startups, using Qrata’s talent marketplaces. To sign up, please visit Qrata Talent Sign-Up.
ABOUT THE COMPANY WE ARE HIRING FOR:
Our client offers credit card solutions for banks and financial institutions, providing services such as credit card design and onboarding, credit card authorization, payment processing, collections and dispute resolution, credit card fraud detection, and more. They serve the B2B space in the FinTech market segment.
POSITION OVERVIEW
We are seeking an experienced individual for the role of Head of Analytics. As the Head of Analytics, you will be responsible for driving data-driven decision-making, implementing advanced analytics strategies, and providing valuable insights to optimize our credit card business operations, sales and marketing, risk management & customer experience. Your expertise in statistical analysis, predictive modeling, and data visualization will be instrumental in driving growth and enhancing the overall performance of our credit card business.
Responsibilities:
1. Develop and implement Analytics Strategy:
o Define the analytics roadmap for the credit card business, aligning it with overall
business objectives.
o Identify key performance indicators (KPIs) and metrics to track the performance
of the credit card business.
o Collaborate with senior management and cross-functional teams to prioritize and
execute analytics initiatives.
2. Lead Data Analysis and Insights:
o Conduct in-depth analysis of credit card data, customer behavior, and market trends to identify opportunities for business growth and risk mitigation.
o Develop predictive models and algorithms to assess credit risk, customer segmentation, acquisition, retention, and upsell opportunities.
o Generate actionable insights and recommendations based on data analysis to optimize credit card product offerings, pricing, and marketing strategies.
o Regularly present findings and recommendations to senior leadership, using data visualization techniques to effectively communicate complex information.
3. Drive Data Governance and Quality:
o Oversee data governance initiatives, ensuring data accuracy, consistency, and
integrity across relevant systems and platforms.
o Collaborate with IT teams to optimize data collection, integration, and storage
processes to support advanced analytics capabilities.
o Establish and enforce data privacy and security protocols to comply with
regulatory requirements.
4. Team Leadership and Collaboration:
o Build and manage a high-performing analytics team, fostering a culture of innovation, collaboration, and continuous learning.
o Provide guidance and mentorship to the team, promoting professional growth and development.
o Collaborate with stakeholders across departments, including Marketing, Risk Management, and Finance, to align analytics initiatives with business objectives.
5. Stay Updated on Industry Trends:
o Keep abreast of emerging trends, techniques, and technologies in analytics, credit
card business, and the financial industry.
o Leverage industry best practices to drive innovation and continuous improvement
in analytics methodologies and tools.
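The credit-risk modeling responsibilities above can be made concrete with a deliberately simplified additive risk score of the kind such models formalize. Every variable name, cap, and weight below is hypothetical, not a calibrated scorecard:

```python
def credit_risk_score(utilization, missed_payments, months_on_book):
    """Toy additive risk score (higher = riskier). Weights and caps are
    illustrative placeholders, not production values."""
    score = 0.0
    score += 40.0 * min(utilization, 1.0)    # high credit utilization adds risk
    score += 25.0 * min(missed_payments, 4)  # missed payments add risk, capped
    score -= 0.5 * min(months_on_book, 60)   # longer tenure reduces risk
    return max(score, 0.0)
```

A real scorecard would be fit to historical default data (e.g. via logistic regression) and validated; this sketch only shows the shape of the feature-to-score mapping.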
Qualifications:
Bachelor's or master’s degree in Technology, Mathematics, Statistics, Economics, Computer Science, or a related field.
Proven experience (7+ years) in leading analytics teams in the credit card industry.
Strong expertise in statistical analysis, predictive modelling, data mining, and segmentation techniques.
Proficiency in data manipulation and analysis using programming languages such as Python, R, or SQL.
Experience with analytics tools such as SAS, SPSS, or Tableau.
Excellent leadership and team management skills, with a track record of building and developing high-performing teams.
Strong knowledge of credit card business and understanding of credit card industry dynamics, including risk management, marketing, and customer lifecycle.
Exceptional communication and presentation skills, with the ability to effectively communicate complex information to a varied audience.
We are a 20-year-old IT services company from Kolkata working in India and abroad. We primarily work as an SSP (Software Solutions Partner) and serve some of the leading business houses in the country in various software project implementations, especially on the SAP and Oracle platforms, and also work on government and semi-government projects as an outsourcing partner across India.
Can be anywhere in India (Mumbai, Pune and Kolkata preferred)
JD
Machine Learning/Deep Learning experience above 3 years
- Clear and structured thinking and communication
Keywords: Machine Learning, Deep Learning, AI,
Regression, Classification, Clustering, NLP, CNN, RNN,
LSTM, AutoML, k-NN, Naive Bayes, SVM, Decision
Forests
- Understand granular requirements and the underlying business
problem, and convert them to a low-level design
- Develop analytic process chain with pre-processing,
training, testing, boosting etc.
- Develop the technical deliverable in mcube
(Python/Spark ML/R, H2O/Tensorflow) as per design
- Ensure quality of deliverable (coding standards, data
quality, data reconciliation)
- Proactively raise risks to the Technical Lead
- Machine Learning, Deep Learning, Regression,
Classification, Clustering, NLP, CNN, RNN
- Expertise in data analysis and analytic programming
(Python/R/ SparkML/Tensorflow)
- Experience in multiple data processing technologies
(preferably Pentaho or Spark)
- Basic knowledge in effort estimation, Clear and
structured thinking and communication
- Expertise in testing accuracy of deliverables (model)
- Exposure to Data Modelling and Analysis
- Exposure to information delivery (model outcome
communication)
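Several of the keywords above, k-NN in particular, can be sketched in a few lines of plain Python. The toy points and labels are invented for illustration:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.
    `train` is a list of ((x, y), label) pairs; distance is Euclidean."""
    dist = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two toy clusters: "a" near the origin, "b" near (5, 5)
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
```

At production scale this brute-force search would be replaced by an indexed structure (k-d tree, approximate nearest neighbours), but the voting logic is the same.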
Qualification:
M.S. / M.Tech/ B.Tech / B.E. (in this order of preference)
- Master's course in Data Science after a technical (engineering/
science) degree
Requirements
Apply cutting-edge technology to problems that have never been solved before, working with architecture and
product teams to collaboratively visualize, design and create machine learning models for M3LD, the
world’s first privacy platform that helps users get control of their data and enterprises establish trust-based relationships.
KEY ACCOUNTABILITIES & ACTIVITIES
AI-ML Software Engineer
Accountabilities & Activities
▪ Study and transform data science prototypes
▪ Design machine learning systems
▪ Research and implement appropriate ML algorithms and tools
▪ Develop machine learning applications according to requirements
▪ Select appropriate datasets and data representation methods
▪ Run machine learning tests and experiments
▪ Perform statistical analysis and fine-tuning using test results
▪ Train and retrain systems when necessary
▪ Extend existing ML libraries and frameworks
▪ Keep abreast of developments in the field
BACKGROUND, SKILLS & QUALIFICATIONS
Knowledge, Skills and Experience
▪ Ability and passion to deliver extraordinary results with minimal direction
▪ Collaborating with teams to dissect complex problems and design solutions tailored to M3LD
needs
▪ Expertise in big data processing, data pipeline, machine learning, and AI processing methods.
▪ Proficiency in programming languages - e.g. Python, Java, C++, Ruby.
▪ Track-record of shipping and maintaining code in production with a commitment to high quality,
well-tested code, and automation
▪ Experience with object-oriented design, multi-threading, and synchronisation
▪ Experience with ML frameworks and libraries (e.g. Spark, TensorFlow, OpenAI)
▪ Ability to optimize runtime with performant data structures, leveraging distributed systems and/or
cloud platforms
▪ Excellent written skills and the ability to produce system design documentation.
▪ Experience working in an “agile” development environment, with collaboration tools like Jira,
Confluence, etc.
▪ Ability to communicate effectively with cross project stakeholders both verbally and in writing
▪ Ability to collaborate with distributed teams across time zones
Qualifications
▪ 5+ years of relevant work experience in enterprise, mobile and complex solution development,
and software engineering
▪ Strong computer science fundamentals required
▪ Experience with deep learning frameworks.
▪ Strong experience in programming and statistics.
▪ Experience working with DevOps practices, Git version control, and agile development
approaches.
▪ Experience developing for major cloud platforms, including Azure, AWS, Google Cloud, and
Oracle Cloud Infrastructure is preferred.
What is the role?
You will be responsible for building and maintaining highly scalable data infrastructure for our cloud-hosted SaaS product. You will work closely with the Product Managers and Technical team to define and implement data pipelines for customer-facing and internal reports.
Key Responsibilities
- Design and develop resilient data pipelines.
- Write efficient queries to fetch data from the report database.
- Work closely with application backend engineers on data requirements for their stories.
- Designing and developing report APIs for the front end to consume.
- Focus on building highly available, fault-tolerant report systems.
- Constantly improve the architecture of the application by clearing the technical backlog.
- Adopt a culture of learning and development to constantly keep pace with and adopt new technologies.
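As a toy illustration of the "efficient queries" and report-API responsibilities above, the sketch below uses an in-memory SQLite database as a stand-in for the report store; the table and column names are hypothetical, not the product's actual schema:

```python
import sqlite3

# In-memory SQLite stands in for the report database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 7.5)])
# An index on the filter column keeps report queries fast as data grows.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")

def user_total(user_id):
    """Fetch a per-user aggregate, the kind of query a report API serves."""
    row = conn.execute("SELECT COALESCE(SUM(amount), 0) FROM events "
                       "WHERE user_id = ?", (user_id,)).fetchone()
    return row[0]
```

A real report API would wrap such a function behind an HTTP endpoint and add caching; the indexing-plus-aggregation pattern is the transferable part.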
What are we looking for?
An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of them. We are open to promising candidates who are passionate about their work and are team players.
- Education - BE/MCA or equivalent
- Overall 8+ years of experience
- Expert-level understanding of database concepts and BI.
- Well versed in databases such as MySQL and MongoDB, with hands-on experience in creating data models.
- Must have designed and implemented low latency data warehouse systems.
- Must have strong understanding of Kafka and related systems.
- Experience with the ClickHouse database preferred.
- Must have good knowledge of APIs and should be able to build interfaces for frontend engineers.
- Should be innovative and communicative in approach
- Will be responsible for functional/technical track of a project
Whom will you work with?
You will work with a top-notch tech team, working closely with the CTO and product team.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle concepts while maintaining quality, interact and share your ideas, and learn a great deal at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.
We are
Xoxoday is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, Xoxoday offers a suite of three products - Plum, Empuls, and Compass. Xoxoday works with more than 2000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, Xoxoday is a 300+ strong team with four global offices in San Francisco, Dublin, Singapore, New Delhi.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We assure you, however, that we will attempt to maintain a reasonable time window for successfully closing this requirement. Candidates will be kept informed and updated on the feedback and application status.
Key Responsibilities: (Data Developer - Python, Spark)
Experience: 2 to 9 years
Development of data platforms, integration frameworks, processes, and code.
Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages
Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests and unit tests.
Elaborate stories in a collaborative agile environment (SCRUM or Kanban)
Familiarity with cloud platforms like GCP, AWS or Azure.
Experience with large data volumes.
Familiarity with writing rest-based services.
Experience with distributed processing and systems
Experience with Hadoop / Spark toolsets
Experience with relational database management systems (RDBMS)
Experience with Data Flow development
Knowledge of Agile and associated development techniques.
- We are looking for a Data Engineer to build the next-generation mobile applications for our world-class fintech product.
- The candidate will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams.
- The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up.
- Looking for a person with a strong ability to analyse and provide valuable insights to the product and business team to solve daily business problems.
- You should be able to work in a high-volume environment, have outstanding planning and organisational skills.
Qualifications for Data Engineer
- Working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Experience building and optimising ‘big data’ data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Looking for a candidate with 2-3 years of experience in a Data Engineer role, who is a CS graduate or has an equivalent experience.
What we're looking for?
- Experience with big data tools: Hadoop, Spark, Kafka and other alternate tools.
- Experience with relational SQL and NoSQL databases, including MySQL/Postgres and MongoDB.
- Experience with data pipeline and workflow management tools: Luigi, Airflow.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with stream-processing systems: Storm, Spark-Streaming.
- Experience with object-oriented/object function scripting languages: Python, Java, Scala.
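The stream-processing tools listed above (Storm, Spark Streaming) center on windowed aggregation over unbounded data. A single-process sketch of a sliding-window average shows the idea; the values are toy data:

```python
from collections import deque

def sliding_window_avg(stream, window=3):
    """Consume an event stream lazily and yield a moving average -
    a toy stand-in for the windowed aggregations that Storm or
    Spark Streaming run at scale over real event streams."""
    buf = deque(maxlen=window)  # deque drops the oldest value automatically
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)

list(sliding_window_avg(iter([2, 4, 6, 8]), window=2))  # → [2.0, 3.0, 5.0, 7.0]
```

The generator consumes one event at a time without materializing the stream, which is the same constraint real stream processors work under.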
We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role will closely collaborate with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.
Roles and Responsibilities:
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
- Develop programs in Scala and Python as part of data cleaning and processing.
- Assemble large, complex data sets that meet functional / non-functional business requirements, fostering data-driven decision making across the organization.
- Responsible for designing and developing distributed, high-volume, high-velocity multi-threaded event processing systems.
- Implement processes and systems to validate data, monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Provide high operational excellence guaranteeing high availability and platform stability.
- Closely collaborate with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.
Skills:
- Experience with Big Data pipeline, Big Data analytics, Data warehousing.
- Experience with SQL/NoSQL, schema design and dimensional data modeling.
- Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
- Experience in designing systems that process structured as well as unstructured data at large scale.
- Experience in AWS/Spark/Java/Scala/Python development.
- Should have strong skills in PySpark (Python & Spark). Ability to create, manage and manipulate Spark DataFrames. Expertise in Spark query tuning and performance optimization.
- Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
- Prior exposure to streaming data sources such as Kafka.
- Should have knowledge on Shell Scripting and Python scripting.
- High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
- Experience with NoSQL databases such as Cassandra / MongoDB.
- Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience building and deploying applications on on-premise and cloud-based infrastructure.
- Having a good understanding of machine learning landscape and concepts.
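The data-validation and data-quality responsibilities above can be sketched as a simple record-level quality gate; the field names and rules below are illustrative only, not any particular pipeline's schema:

```python
def validate_records(records, required=("id", "amount")):
    """Split records into clean and rejected lists - a toy version of the
    data-quality gates a production pipeline runs before loading data."""
    clean, rejected = [], []
    for rec in records:
        # Every required field must be present and non-null...
        ok = all(rec.get(f) is not None for f in required)
        # ...and the amount must be a non-negative number.
        ok = ok and isinstance(rec.get("amount"), (int, float)) and rec["amount"] >= 0
        (clean if ok else rejected).append(rec)
    return clean, rejected

clean, rejected = validate_records([
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},   # null value -> rejected
    {"amount": 5.0},             # missing id -> rejected
])
```

In a real pipeline the rejected records would be routed to a quarantine table with a reason code so data-quality metrics can be monitored over time.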
Qualifications and Experience:
Engineering and postgraduate candidates, preferably in Computer Science, from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.
Certifications:
Good to have at least one of the Certifications listed here:
AZ 900 - Azure Fundamentals
DP 200, DP 201, DP 203, AZ 204 - Data Engineering
AZ 400 - Devops Certification
Dear Candidate,
Greetings of the day!
As discussed, please find the job description below.
Job Title : Hadoop developer
Experience : 3+ years
Job Location : New Delhi
Job type : Permanent
Knowledge and Skills Required:
Brief Skills:
Hadoop, Spark, Scala and Spark SQL
Main Skills:
- Strong experience in Hadoop development
- Experience in Spark
- Experience in Scala
- Experience in Spark SQL
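Hadoop, Spark and Spark SQL all build on the MapReduce model. A toy single-process word count sketches the map, shuffle/sort and reduce phases; the input lines are invented for illustration:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Shuffle/sort pairs by key, then reduce each group by summing counts."""
    pairs = sorted(pairs, key=itemgetter(0))  # stands in for the shuffle/sort step
    return {word: sum(c for _, c in group)
            for word, group in groupby(pairs, key=itemgetter(0))}

counts = reduce_phase(map_phase(["spark and hadoop", "spark sql"]))
```

Hadoop and Spark distribute exactly these phases across a cluster, which is why the model scales: map and reduce each operate on independent pieces of data.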
Why OTSI!
Working with OTSI gives you the assurance of a successful, fast-paced career.
Exposure to infinite opportunities to learn and grow, familiarization with cutting-edge technologies, cross-domain experience and a harmonious environment are some of the prime attractions for a career-driven workforce.
Join us today, as we assure you 2000+ friends and a great career; happiness begins at a great workplace!
Feel free to refer this opportunity to your friends and associates.
About OTSI (CMMI Level 3): Founded in 1999 and headquartered in Overland Park, Kansas, OTSI offers global reach and local delivery to companies of all sizes, from start-ups to Fortune 500s. Through offices across the US and around the world, we provide universal access to exceptional talent and innovative solutions in a variety of delivery models to reduce overall risk while optimizing outcomes and enabling our customers to thrive in a global economy.
OTSI's global presence, scalable and sustainable world-class infrastructure, business continuity processes, and ISO 9001:2000 and CMMI 3 certifications make us a preferred service provider for our clients. OTSI has expertise in a range of technologies, enhanced by our partnerships and alliances with industry giants like HP, Microsoft, IBM, Oracle and SAP, among others. A highly reputable local company with a proven record of serving the UAE Government's IT needs, we seek to attract, employ and develop people with exceptional skills who want to make a difference in a challenging environment. Object Technology Solutions India Pvt Ltd is a leading Global Information Technology (IT) Services and Solutions company offering a wide array of solutions for a range of key verticals. The company is headquartered in Overland Park, Kansas, and has a strong presence in the US, Europe and Asia-Pacific, with a Global Delivery Center based in India. OTSI offers a broad range of IT application solutions and services, including e-Business solutions, Enterprise Resource Planning (ERP) implementation and post-implementation support, application development, application maintenance, and software customization services.
OTSI Partners & Practices
- SAP Partner
- Microsoft Silver Partner
- Oracle Gold Partner
- Microsoft CoE
- DevOps Consulting
- Cloud
- Mobile & IoT
- Digital Transformation
- Big data & Analytics
- Testing Solutions
OTSI Honors & Awards:
- #91 in the Inc. 5000
- Among the fastest growing IT companies in the Inc. 5000