Ab Initio, Big Data, Informatica, Tableau, Data Architect, Cognos, MicroStrategy, Healthcare Business Analysts, Cloud, etc.
at Exusia

Immediate Joiners Preferred
NOTE: Working Shift: 11 am - 8 pm IST (+/- one hour on a need basis), Monday to Friday
Responsibilities :
1. Leverage Alteryx to build, maintain, execute, and deliver data assets. This entails using data analytics tools (a combination of proprietary tools, Excel, and scripting), validating raw data quality, and working with Tableau/Power BI and other geospatial data visualization applications
2. Analyze large data sets in Alteryx, finding patterns and providing a concise summary
3. Implement, maintain, and troubleshoot analytics solutions and derive key insights from Tableau/Power BI visualizations and data analysis
4. Improve ETL setup and develop components so they can be rapidly deployed and configured
5. Conduct data assessments, perform data quality checks, and transform data using SQL and ETL tools
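As a rough illustration of the kind of data quality checks mentioned above, here is a minimal pandas sketch; the file name, column names, and check thresholds are hypothetical:

```python
# Simple data-quality checks in pandas; the file name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("raw_extract.csv")

checks = {
    "row_count": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "null_customer_id": int(df["customer_id"].isna().sum()),
    "negative_amounts": int((df["amount"] < 0).sum()),
}

for name, value in checks.items():
    print(f"{name}: {value}")

# Fail fast if a critical check is violated.
assert checks["null_customer_id"] == 0, "customer_id must not be null"
```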
Qualification :
1. 3+ years of relevant and professional work experience with a reputed analytics firm
2. Bachelor's degree in Engineering / Information Technology from a reputed college
3. Must have the knowledge to design, build, and optimize complex ETL workflows using Alteryx
4. Expertise with visualization tools such as Tableau/Power BI to solve business problems related to data exploration and visualization
5. Good knowledge of handling large volumes of data through SQL, T-SQL, or PL/SQL
6. Basic knowledge of Python for data processing is good to have
7. Very good understanding of data warehousing and data lake concepts
We’re seeking a highly skilled, execution-focused Senior Data Scientist with a minimum of 5 years of experience. This role demands hands-on expertise in building, deploying, and optimizing machine learning models at scale, while working with big data technologies and modern cloud platforms. You will be responsible for driving data-driven solutions from experimentation to production, leveraging advanced tools and frameworks across Python, SQL, Spark, and AWS. The role requires strong technical depth, problem-solving ability, and ownership in delivering business impact through data science.
Responsibilities
- Design, build, and deploy scalable machine learning models into production systems.
- Develop advanced analytics and predictive models using Python, SQL, and popular ML/DL frameworks (Pandas, Scikit-learn, TensorFlow, PyTorch).
- Leverage Databricks, Apache Spark, and Hadoop for large-scale data processing and model training.
- Implement workflows and pipelines using Airflow and AWS EMR for automation and orchestration (a minimal DAG sketch follows this list).
- Collaborate with engineering teams to integrate models into cloud-based applications on AWS.
- Optimize query performance, storage usage, and data pipelines for efficiency.
- Conduct end-to-end experiments, including data preprocessing, feature engineering, model training, validation, and deployment.
- Drive initiatives independently with high ownership and accountability.
- Stay up to date with industry best practices in machine learning, big data, and cloud-native deployments.
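As a minimal illustration of the Airflow orchestration mentioned above, here is a sketch of a daily pipeline DAG in Airflow 2.x style; the DAG id, schedule, and task logic are hypothetical, and an EMR step could be triggered from a similar task:

```python
# Minimal Airflow DAG sketch; DAG id, schedule, and task logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw data from the source system.
    print("extracting raw data")


def train():
    # Placeholder: run feature engineering and model training.
    print("training model")


with DAG(
    dag_id="daily_model_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    train_task = PythonOperator(task_id="train", python_callable=train)

    extract_task >> train_task
```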
Requirements:
- Minimum 5 years of experience in Data Science or Applied Machine Learning.
- Strong proficiency in Python, SQL, and ML libraries (Pandas, Scikit-learn, TensorFlow, PyTorch).
- Proven expertise in deploying ML models into production systems.
- Experience with big data platforms (Hadoop, Spark) and distributed data processing.
- Hands-on experience with Databricks, Airflow, and AWS EMR.
- Strong knowledge of AWS cloud services (S3, Lambda, SageMaker, EC2, etc.).
- Solid understanding of query optimization, storage systems, and data pipelines.
- Excellent problem-solving skills, with the ability to design scalable solutions.
- Strong communication and collaboration skills to work in cross-functional teams.
Benefits:
- Best in class salary: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.
About Us:
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, for client companies with a combined net worth of $45.7 billion.
Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.
Job Description:
The data science team is responsible for solving business problems with complex data. Data complexity can be characterized in terms of volume, dimensionality, and multiple touchpoints/sources. We understand the data, ask fundamental, first-principles questions, and apply our analytical and machine learning skills to solve the problem in the best way possible.
Our ideal candidate
This is a client-facing role, so good communication skills are a must.
The candidate should be able to communicate complex models and analyses in a clear and precise manner.
The candidate would be responsible for:
- Comprehending business problems properly - what to predict, how to build the DV, what value addition they bring to the client, etc.
- Understanding and analyzing large, complex, multi-dimensional datasets and building features relevant for the business
- Understanding the math behind algorithms and choosing one over another
- Understanding approaches like stacking and ensembling, and applying them correctly to increase accuracy
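As an illustration of the kind of stacking/ensembling referenced above, here is a minimal scikit-learn sketch; the dataset and choice of base learners are arbitrary and purely for demonstration:

```python
# Minimal stacking-ensemble sketch (illustrative only; models and data are arbitrary).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Base learners whose out-of-fold predictions feed a meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("svc", SVC(probability=True, random_state=42)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```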
Desired technical requirements
- Proficiency with Python and the ability to write production-ready code.
- Experience in PySpark, machine learning, and deep learning
- Big data experience (e.g., familiarity with Spark and Hadoop) is highly preferred
- Familiarity with SQL or other databases.
Work Timing: 5 Days A Week
Responsibilities include:
• Ensure the right stakeholders get the right information at the right time
• Gather requirements from stakeholders to understand their data needs
• Creating and deploying reports
• Participate actively in data mart design discussions
• Work on both RDBMS and Big Data platforms for designing BI solutions
• Write code (queries/procedures) in SQL / Hive / Drill that is both functional and elegant, following appropriate design patterns (an illustrative query sketch follows this list)
• Design and plan BI solutions to automate regular reporting
• Debugging, monitoring and troubleshooting BI solutions
• Creating and deploying data marts
• Writing relational and multidimensional database queries
• Integrate heterogeneous data sources into BI solutions
• Ensure Data Integrity of data flowing from heterogeneous data sources into BI solutions.
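A minimal sketch of the kind of reporting query described above, run through PySpark with Hive support; the `sales` table, its columns, and the target data mart table are hypothetical:

```python
# Illustrative Hive-style aggregation query run through PySpark.
# Assumes a Hive metastore is configured; the `sales` table, its columns,
# and the `reporting` database are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bi-report").enableHiveSupport().getOrCreate()

monthly_revenue = spark.sql("""
    SELECT region,
           date_format(order_date, 'yyyy-MM') AS order_month,
           SUM(amount)                        AS revenue,
           COUNT(DISTINCT customer_id)        AS active_customers
    FROM sales
    WHERE order_date >= date_sub(current_date(), 365)
    GROUP BY region, date_format(order_date, 'yyyy-MM')
    ORDER BY region, order_month
""")

# Persist the result as a reporting data mart table.
monthly_revenue.write.mode("overwrite").saveAsTable("reporting.monthly_revenue")
spark.stop()
```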
Minimum Job Qualifications:
• BE/B.Tech in Computer Science/IT from Top Colleges
• 1-5 years of experience in data warehousing and SQL
• Excellent Analytical Knowledge
• Excellent technical as well as communication skills
• Attention to even the smallest detail is mandatory
• Knowledge of SQL query writing and performance tuning
• Knowledge of Big Data technologies like Apache Hadoop, Apache Hive, Apache Drill
• Knowledge of fundamentals of Business Intelligence
• In-depth knowledge of RDBMS systems, data warehousing, and data marts
• Smart, motivated and team oriented
Desirable Requirements
• Sound knowledge of software development and programming (preferably Java)
• Knowledge of the software development lifecycle (SDLC) and models
We are looking for an MIS Analyst to work with the other analysts on the team to convert data into reports in various forms, providing meaningful insights to the users of those reports and helping them in the decision-making process.
Experience Level: 1-2 years of experience as an MIS executive/analyst
Responsibilities:
- Managing the data, including creation, updates, and deletion
- Collecting, interpreting, and analyzing the data, and developing reports and dashboards
- Generating reports from single or multiple systems
- Processing confidential data and information according to the guidelines
- Creating visualizations and reports for requested projects in MS Excel and Power BI
- Developing and updating documentation
Skills / Requirements:
- Strong knowledge of and experience with advanced MS Excel (MUST HAVE)
- Basic knowledge of reporting tools like Power BI or Tableau
- Basic knowledge of SQL queries
- Strong mathematical and analytical skills, with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
- Ability to analyze large data sets and create meaningful visualizations and reports
- Analytical mind with a problem-solving attitude
- Excellent communication
- BSc/MSc in Computer Science, MSc/MCA, Engineering, or a relevant field
Our client combines Adtech and Martech platform strategy with data science & data engineering expertise, helping our clients make advertising work better for people.
- Act as the primary day-to-day analytics contact for agency-client leads
- Develop bespoke analytics proposals for presentation to agencies & clients, for delivery within the teams
- Ensure delivery of projects and services across the analytics team meets our stakeholder requirements (time, quality, cost)
- Work hands-on with platforms to perform data pre-processing, covering both data transformation and data cleaning (a brief sketch follows this list)
- Ensure data quality and integrity
- Interpret and analyse data problems
- Build analytic systems and predictive models
- Increase the performance and accuracy of machine learning algorithms through fine-tuning and further refinement
- Visualize data and create reports
- Experiment with new models and techniques
- Align data projects with organizational goals
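As an illustration of the pre-processing and transformation work mentioned above, a minimal pandas sketch; the input file and column names are hypothetical:

```python
# Minimal data-cleaning sketch; the CSV file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("campaign_impressions.csv", parse_dates=["event_date"])

# Basic cleaning: drop exact duplicates, normalise text, handle missing values.
df = df.drop_duplicates()
df["channel"] = df["channel"].str.strip().str.lower()
df["spend"] = pd.to_numeric(df["spend"], errors="coerce").fillna(0.0)
df = df.dropna(subset=["event_date", "campaign_id"])

# Simple transformation: daily spend per channel, ready for modelling or reporting.
daily_spend = (
    df.groupby([pd.Grouper(key="event_date", freq="D"), "channel"])["spend"]
      .sum()
      .reset_index()
)
print(daily_spend.head())
```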
Requirements
- Minimum 6-7 years’ experience working in Data Science
- Prior experience as a Data Scientist within digital media is desirable
- Solid understanding of machine learning
- A degree in a quantitative field (e.g. economics, computer science, mathematics, statistics, engineering, physics, etc.)
- Experience with SQL / BigQuery / the GMP tech stack / clean rooms such as ADH
- A knack for statistical analysis and predictive modelling
- Good knowledge of R, Python
- Experience with SQL databases such as MySQL and PostgreSQL
- Knowledge of data management and visualization techniques
- Hands-on experience with BI/visual analytics tools like Power BI, Tableau, or Data Studio
- Evidence of technical comfort and a good understanding of internet functionality is desirable
- Analytical pedigree - evidence of having approached problems from a mathematical perspective and working through to a solution in a logical way
- Proactive and results-oriented
- A positive, can-do attitude with a thirst to continually learn new things
- An ability to work independently and collaboratively with a wide range of teams
- Excellent communication skills, both written and oral
- Core Java: advanced-level competency; should have worked on projects involving core Java development.
- Linux shell: advanced-level competency; work experience with Linux shell scripting and knowledge of the important shell commands.
- RDBMS, SQL: advanced-level competency; should have expertise in SQL query syntax and be well versed in aggregations and joins.
- Data structures and problem solving: should be able to choose and use appropriate data structures.
- AWS cloud: good to have experience with the AWS serverless toolset along with AWS infrastructure.
- Data engineering ecosystem: good to have experience and knowledge of data engineering, ETL, and data warehousing (any toolset).
- Hadoop, HDFS, YARN: should have an introduction to the internal workings of these toolsets.
- Hive, MapReduce, Spark: good to have experience developing transformations using Hive queries, MapReduce jobs, and Spark jobs; Spark implementation in Scala is a plus (a minimal Spark sketch follows this list).
- Airflow, Oozie, Sqoop, ZooKeeper, Kafka: good to have knowledge of the purpose and workings of these toolsets; working experience is a plus.
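A minimal PySpark sketch of the kind of join-and-aggregate transformation referenced above; the input paths, table layout, and column names are hypothetical:

```python
# Illustrative join + aggregation in PySpark; paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-summary").getOrCreate()

orders = spark.read.parquet("/data/orders")        # hypothetical path
customers = spark.read.parquet("/data/customers")  # hypothetical path

# Join orders to customers and aggregate revenue per customer segment.
summary = (
    orders.join(customers, on="customer_id", how="inner")
          .groupBy("segment")
          .agg(
              F.sum("amount").alias("total_revenue"),
              F.countDistinct("customer_id").alias("customers"),
          )
          .orderBy(F.desc("total_revenue"))
)
summary.write.mode("overwrite").parquet("/data/summary/revenue_by_segment")
spark.stop()
```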
Location: Pune/Nagpur, Goa, Hyderabad
Job Requirements:
- 9+ years of total experience, preferably in the big data space.
- Experience creating Spark applications using Scala to process data.
- Experience in scheduling and troubleshooting/debugging Spark jobs run as steps (e.g., EMR steps).
- Experience in Spark job performance tuning and optimization.
- Should have experience in processing data using Kafka/Python (a minimal streaming sketch follows this list).
- Should have experience and understanding of configuring Kafka topics to optimize performance.
- Should be proficient in writing SQL queries to process data in a data warehouse.
- Hands-on experience working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
- Experience with AWS services like EMR.
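As a sketch of the Kafka/Python processing mentioned above, a minimal PySpark Structured Streaming job; the broker address, topic name, message schema, and output paths are hypothetical, and the spark-sql-kafka connector is assumed to be available:

```python
# Minimal PySpark Structured Streaming sketch reading from Kafka.
# Broker, topic, schema, and paths are hypothetical; requires the
# spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
         .option("subscribe", "orders")                      # hypothetical topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Write the parsed stream to Parquet with checkpointing for fault tolerance.
query = (
    events.writeStream.format("parquet")
          .option("path", "/data/orders_stream")
          .option("checkpointLocation", "/checkpoints/orders_stream")
          .start()
)
query.awaitTermination()
```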
Job Description:
We are looking for a Big Data Engineer who has worked across the entire ETL stack: someone who has ingested data in both batch and live-stream formats, transformed large volumes of daily data, built data warehouses to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions to use for these purposes, then implementing, maintaining, and monitoring them.
Responsibilities:
- Develop, test, and implement data solutions based on functional / non-functional business requirements.
- You would be required to code daily in Scala and PySpark, on cloud as well as on-prem infrastructure
- Build data models to store the data in the most optimized manner
- Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Implement the ETL process and optimal data pipeline architecture
- Monitor performance and advise on any necessary infrastructure changes.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Proactively identify potential production issues and recommend and implement solutions
- Must be able to write quality code and build secure, highly available systems.
- Create design documents that describe the functionality, capacity, architecture, and process.
- Review peers' code and pipelines before deploying to production, checking for optimization issues and adherence to code standards
Skill Sets:
- Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
- Proficient understanding of distributed computing principles
- Experience in working with batch-processing/real-time systems using various open-source technologies like NoSQL stores, Spark, Pig, Hive, and Apache Airflow.
- Implemented complex projects dealing with considerable data sizes (petabyte scale).
- Optimization techniques (performance, scalability, monitoring, etc.)
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, etc.
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Creation of DAGs for data engineering
- Expert at Python/Scala programming, especially for data engineering/ETL purposes
We are looking for a savvy Data Engineer to join our growing team of analytics experts.
The hire will be responsible for:
- Expanding and optimizing our data and data pipeline architecture
- Optimizing data flow and collection for cross functional teams.
- Supporting our software developers, database architects, data analysts, and data scientists on data initiatives, and ensuring optimal data delivery architecture is consistent throughout ongoing projects.
- The hire must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
- Experience with Azure : ADLS, Databricks, Stream Analytics, SQL DW, COSMOS DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with object-oriented/functional scripting and query languages: Python, SQL, Scala, Spark SQL, etc.
Nice to have experience with:
- Big data tools: Hadoop, Spark and Kafka
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow
- Stream-processing systems: Storm
Database: SQL DB
Programming languages: PL/SQL, Spark SQL
Looking for candidates with Data Warehousing experience, strong domain knowledge & experience working as a Technical lead.
The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.
