11+ ACL Jobs in Mumbai | ACL Job openings in Mumbai
Apply to 11+ ACL Jobs in Mumbai on CutShort.io. Explore the latest ACL Job opportunities across top companies like Google, Amazon & Adobe.
Job Role : Audit Analytics – Need from Non Banking domain
Job Location : Gurgaon / Mumbai / Bengaluru
- Understanding of business processes and potential risk scenarios.
- Ability to conceptualize appropriate logic for analyzing potential risk scenarios
- Ability to understand requirements clearly and to be flexible in learning new data sources and technologies, meeting tight deadlines, and delivering quality reports for auditors.
- Maintain strong client focus by building positive relationships with clients, scheduling, conducting and presenting on key client meetings.
- Should be able to write/optimize complex scripts in the technology of expertise. Should be able to review results and identify false positives based on business understanding
- Should be a self-starter, eager to tackle business problems using experience and skills
- Play a key role in the development of less experienced staff through mentoring, training and advising.
- 30% Travel in India and Overseas, if required
- Excellent communication skills and willingness to stretch and multi-task
- May be assigned on a project on a long term basis.
- Responsibilities include managing projects involving audit analytics and continuous control monitoring.
- Understanding of business processes (Accounts Payable, Revenue, Fixed Assets, Inventory, MJEs) from an analytics-requirements perspective
- Understanding of ERPs (SAP / JDE / Oracle / Concur etc.) – techno-functional side (tables and reports)
Qualifications
Minimum qualifications
- Preferred: postgraduates – MCom / MSc (IT) / MBA (IT) / BE
- Years of experience in a related field of Audit / Business / Financial analytics (non-banking)
- Working knowledge of analytical / BI tools –
- ACL, SQL / R / Python, Alteryx – should have any two or more
- VBA, Power BI / Tableau / QlikView
- GRC Solutions, AWS/Azure cloud based analytical solutions – Good to have
- Has worked on data analytics support work, whether tool implementation, control automation or MIS development
LogiNext is looking for a technically savvy and passionate Software Engineer - Data Science to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.
In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.
Your goal will be to help our company analyze trends to make better decisions. Without knowledge of how the software works, data scientists may struggle in this role; apart from experience developing in R and Python, they must understand modern approaches to software development and their impact. DevOps, continuous integration and deployment, and cloud computing are everyday skills for managing and processing data.
Responsibilities:
- Identify valuable data sources and automate collection processes
- Undertake preprocessing of structured and unstructured data
- Analyze large amounts of information to discover trends and patterns
- Build predictive models and machine-learning algorithms
- Combine models through ensemble modeling
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams
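The responsibilities above mention combining models through ensemble modeling. A minimal sketch of that idea using majority voting over three hypothetical toy classifiers (pure Python; in practice these would be trained models from an ML library):

```python
from collections import Counter

# Three hypothetical toy "models": each maps a feature value to a class label.
# In practice these would be trained classifiers (e.g. tree, linear, kNN).
def model_a(x):
    return 1 if x > 0.5 else 0

def model_b(x):
    return 1 if x > 0.4 else 0

def model_c(x):
    return 1 if x > 0.7 else 0

def ensemble_predict(x, models):
    """Combine models by majority vote, a simple ensemble strategy."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

models = [model_a, model_b, model_c]
print(ensemble_predict(0.6, models))  # two of three models vote 1, so prints 1
```

The voting rule is only one of several combination strategies; averaging predicted probabilities or stacking a meta-model on top are common alternatives.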
Requirements:
- Bachelor's degree or higher in Computer Science, Information Technology, Information Systems, Statistics, Mathematics, Commerce, Engineering, Business Management, Marketing or a related field from a top-tier school
- 2 to 3 years of experience in data mining, data modeling and reporting
- Understanding of SaaS-based products and services
- Understanding of machine learning and operations research
- Experience with R, SQL and Python; familiarity with Scala, Java or C++ is an asset
- Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
- Analytical mind, business acumen and problem-solving aptitude
- Excellent communication and presentation skills
- Proficiency in Excel for data management and manipulation
- Experience in statistical modeling techniques and data wrangling
- Able to work independently and set goals keeping business objectives in mind
Data Scientist – Delivery & New Frontiers Manager
Job Description:
We are seeking a highly skilled and motivated data scientist to join our Data Science team. The successful candidate will play a pivotal role in our data-driven initiatives and be responsible for designing, developing, and deploying data science solutions that drive business value for stakeholders. This role involves mapping business problems to a formal data science solution, working with a wide range of structured and unstructured data, architecture design, creating sophisticated models, setting up operations for the data science product with support from the MLOps team, and facilitating business workshops. In a nutshell, this person will represent data science and provide expertise across the full project cycle. Expectations of the successful candidate will be above those of a typical data scientist; beyond technical expertise, problem-solving in complex set-ups will be key to success in this role.
Responsibilities:
- Collaborate with cross-functional teams, including software engineers, product managers, and business stakeholders, to understand business needs and identify data science opportunities.
- Map complex business problems to data science problems and design data science solutions using the GCP / Azure Databricks platform.
- Collect, clean, and preprocess large datasets from various internal and external sources.
- Streamline the data science process, working with the Data Engineering and Technology teams.
- Manage multiple analytics projects within a function to deliver end-to-end data science solutions, create insights and identify patterns.
- Develop and maintain data pipelines and infrastructure to support the data science projects
- Communicate findings and recommendations to stakeholders through data visualizations and presentations.
- Stay up to date with the latest data science trends and technologies, specifically on the GCP platform
Education / Certifications:
Bachelor’s or Master’s in Computer Science, Engineering, Computational Statistics or Mathematics.
Job specific requirements:
- Brings 5+ years of deep data science experience
- Strong knowledge of machine learning and statistical modeling techniques in a cloud-based environment such as GCP, Azure or AWS
- Experience with programming languages such as Python, R, Spark
- Experience with data visualization tools such as Tableau, Power BI, and D3.js
- Strong understanding of data structures, algorithms, and software design principles
- Experience with GCP platforms and services such as BigQuery, Cloud ML Engine, and Cloud Storage
- Experience in configuring and setting up version control for code, data, and machine learning models using GitHub.
- Self-driven, able to work with cross-functional teams in a fast-paced environment, and adaptable to changing business needs.
- Strong analytical and problem-solving skills
- Excellent verbal and written communication skills
- Working knowledge of application architecture, data security and compliance.
- You're proficient in the latest AI/machine-learning technologies
- You're proficient in GPT-3 based algorithms
- You have a passion for writing code as well as understanding and crafting the ways systems interact
- You believe in the benefits of agile processes and shipping code often
- You are pragmatic and work to coalesce requirements into reasonable solutions that provide value
Responsibilities
- Deploy well-tested, maintainable and scalable software solutions
- Take end-to-end ownership of the technology stack and product
- Collaborate with other engineers to architect scalable technical solutions
- Embrace and improve our standards and processes to reduce friction and unlock efficiency
Current Ecosystem :
ShibaSwap : https://shibaswap.com/#/
Metaverse : https://shib.io/#/
NFTs : https://opensea.io/collection/theshiboshis
Game : Shiba Eternity on iOS and Android
About Us
Railofy aims to solve one of the biggest challenges for Indian domestic travellers – the waitlist problem in Indian Railways.
We are a small team of technologists who have made significant breakthrough progress in this field using AI, so that no waitlisted railway passenger ever has to miss a trip again.
We are backed by some of the leading VC investors in India (learn more about us on railofy.com)
About the role-
*Must be train smart* (Indian Railways)
As a Data Analyst, you will work with the database and data science teams on managing and updating the ever-changing database related to trains, airports, buses etc.
You will also play a key role in monitoring key KPIs.
Needless to say, you will be exposed to almost all the verticals of a tech startup and play a key role in solving real life problems.
Key Requirements
- Minimum 1 year of experience managing databases, MIS reporting or analytics (non-technical)
- Should be ‘train smart’: knowledge of Indian Railways and trains is required
- Hands-on with MS Excel and Google Sheets
- Access to laptop & internet to work from home
- Willing to relocate to Mumbai
- Graduation (completed) from reputed school / college (any field)
- Good problem-solving skills and ability to think strategically
- Motivation to join an early stage startup and should go beyond compensation
- Should always be open to new learnings and responsibilities
Job Overview
We are looking for a Data Engineer to join our data team to solve data-driven critical
business problems. The hire will be responsible for expanding and optimizing the existing
end-to-end architecture including the data pipeline architecture. The Data Engineer will
collaborate with software developers, database architects, data analysts, data scientists and platform team on data initiatives and will ensure optimal data delivery architecture is
consistent throughout ongoing projects. The right candidate should be hands-on in
developing a hybrid set of data pipelines depending on the business requirements.
Responsibilities
- Develop, construct, test and maintain existing and new data-driven architectures.
- Align architecture with business requirements and provide solutions that best solve the business problems.
- Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies.
- Acquire data from multiple sources across the organization.
- Use programming languages and tools efficiently to collate the data.
- Identify ways to improve data reliability, efficiency and quality.
- Use data to discover tasks that can be automated.
- Deliver updates to stakeholders based on analytics.
- Set up practices for data reporting and continuous monitoring.
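The extraction, transformation and loading responsibilities above can be illustrated with a minimal, standard-library-only sketch: pull raw records from a source, normalize them, and load them into SQLite as a stand-in for a warehouse (the table and field names are hypothetical):

```python
import sqlite3

# Extract: hypothetical raw records from an upstream source.
raw_orders = [
    {"id": 1, "amount": "120.50", "region": " north "},
    {"id": 2, "amount": "80.00", "region": "South"},
]

# Transform: normalize types and clean strings.
clean_orders = [
    (r["id"], float(r["amount"]), r["region"].strip().lower())
    for r in raw_orders
]

# Load: write into a SQLite table standing in for the target warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean_orders)

total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 200.5
```

A production pipeline would replace the in-memory lists with real extractors (APIs, files, databases) and add the reliability practices the bullets mention: validation, monitoring and orchestration.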
Required Technical Skills
- Graduate in Computer Science or a similar quantitative area.
- 1+ years of relevant work experience as a Data Engineer or in a similar role.
- Advanced SQL knowledge, data modelling and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience in developing and optimizing ETL pipelines, big data pipelines and data-driven architectures.
- Must have strong big-data core knowledge and experience in programming using Spark – Python/Scala.
- Experience with orchestration tools like Airflow or similar.
- Experience with Azure Data Factory is good to have.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Good understanding of Git workflow; test-case-driven development and use of CI/CD are good to have.
- Good to have some understanding of Delta tables.
It would be an advantage if the candidate also has experience with the following software/tools:
- Experience with big data tools: Hadoop, Spark, Hive, etc.
- Experience with relational SQL and NoSQL databases
- Experience with cloud data services
- Experience with object-oriented/object function scripting languages: Python, Scala, etc.
Our client is an innovative Fintech company that is revolutionizing the business of short term finance. The company is an online lending startup that is driven by an app-enabled technology platform to solve the funding challenges of SMEs by offering quick-turnaround, paperless business loans without collateral. It counts over 2 million small businesses across 18 cities and towns as its customers. Its founders are IIT and ISB alumni with deep experience in the fin-tech industry, from earlier working with organizations like Axis Bank, Aditya Birla Group, Fractal Analytics, and Housing.com. It has raised funds of Rs. 100 Crore from finance industry stalwarts and is growing by leaps and bounds.
What you will do:
- Reviewing the portfolio monitoring / early warning signals mechanism on an ongoing basis
- Monitoring internal and external data points that may affect the risk level of a decision
- Aggregating data from multiple sources to provide a comprehensive assessment
- Coming up with the solution to reduce risks
- Bringing fresh ideas to the table and being keen observers of trends on analytics and financial services industry
- Creating reports, summaries, presentations and process documents to display results
Desired Candidate Profile
What you need to have:
- MBA / BE / Master's in Statistics / Mathematics, with work experience of 1-5 years in a similar company or related field
- Work experience in analytics consulting for financial services (Indian banks' / NBFCs' in-house analytics units) or fintech/analytics start-ups would be a plus
- Minimum 3-4 years of experience with ETL tools, SQL, SSAS & SSIS
- Good understanding of Data Governance, including Master Data Management (MDM) and Data Quality tools and processes
- Knowledge of programming languages and data formats, e.g. JSON, Python, R
- Hands on experience of SQL database design
- Experience working with REST API
- Influencing and supporting project delivery through involvement in project/sprint planning and QA
- Working experience with Azure
- Stakeholder management
- Good communication skills
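The requirements above mention working with REST APIs and JSON. A minimal stdlib sketch of the usual pattern, parse a JSON response and filter it; the endpoint and fields are hypothetical, and the response body is stubbed as a canned string here rather than fetched over HTTP:

```python
import json

# Canned JSON, standing in for the body of a GET to a hypothetical
# /loans endpoint; a real call would fetch this with urllib.request
# or an HTTP client library.
response_body = (
    '{"loans": ['
    '{"id": 101, "amount": 50000, "status": "active"},'
    '{"id": 102, "amount": 25000, "status": "closed"}'
    ']}'
)

data = json.loads(response_body)
active = [loan for loan in data["loans"] if loan["status"] == "active"]
print(len(active))  # 1
```

Real integrations add authentication, pagination and error handling on top of this parse-and-filter core.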
at Persistent Systems
We are hiring a Senior Data Architect for a reputed company.
Experience required: 10-19 years
Skills required: hands-on experience with Kafka, stored procedures and Snowflake.
Role :
- Understand and translate statistics and analytics to address business problems
- Responsible for helping with data preparation and data pulls, which is the first step in machine learning
- Should be able to cut and slice data to extract interesting insights
- Model development for better customer engagement and retention
- Hands on experience in relevant tools like SQL(expert), Excel, R/Python
- Working on strategy development to increase business revenue
Requirements:
- Hands on experience in relevant tools like SQL(expert), Excel, R/Python
- Statistics: Strong knowledge of statistics
- Should be able to do data scraping and data mining
- Be self-driven, and show ability to deliver on ambiguous projects
- An ability and interest in working in a fast-paced, ambiguous and rapidly-changing environment
- Should have worked on business projects for an organization, e.g. customer acquisition, customer retention.
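Data scraping, as mentioned in the requirements above, can be sketched with the standard library's html.parser; the HTML snippet here is a made-up stand-in for a page that would normally be fetched over HTTP:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags as the parser walks the HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Hypothetical page content; in practice this would be downloaded first.
html = '<ul><li><a href="/a">A</a></li><li><a href="/b">B</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/a', '/b']
```

For larger scraping or mining tasks, dedicated libraries (e.g. BeautifulSoup) are the usual choice, but the extract-and-collect pattern is the same.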
Lead and drive the development in BI domain using Tableau eco-system with deep technical and BI ecosystem knowledge. The resource will be responsible for the dashboard design, development, and delivery of BI services using Tableau eco-system.
Key functions & responsibilities:
Communication & interaction with the Project Manager to understand the requirement
Dashboard designing, development and deployment using Tableau eco-system
Ensure delivery within a given time frame while maintaining quality
Stay up to date with current tech and bring relevant ideas to the table
Proactively work with the Management team to identify and resolve issues
Performs other related duties as assigned or advised
He/she should be a leader who sets the standard and expectations through example in his/her conduct, work ethic, integrity and character
Contribute to dashboard design, R&D and project delivery using Tableau
Candidate’s Profile
Academics:
Bachelor’s degree, preferably in Computer Science.
Master’s degree would have an added advantage.
Experience:
Overall 2-5 years of experience in DWBI development projects, having worked on BI and visualization technologies (Tableau, QlikView) for at least 2 years.
At least 2 years of experience covering Tableau implementation lifecycle including hands-on development/programming, managing security, data modelling, data blending, etc.
Technology & Skills:
Hands on expertise of Tableau administration and maintenance
Strong working knowledge and development experience with Tableau Server and Desktop
Strong knowledge in SQL, PL/SQL and Data modelling
Knowledge of databases like Microsoft SQL Server, Oracle, etc.
Exposure to alternate Visualization technologies like Qlikview, Spotfire, Pentaho etc.
Good communication and analytical skills with excellent creative and conceptual thinking abilities
Superior organizational skills, attention to detail/level of quality, and strong communication skills, both verbal and written