Sr Data Engineer or Data Architect
Softobiz Technologies Private limited

Posted by Adlin Asha
8 - 18 yrs
₹15L - ₹30L / yr
Hyderabad
Skills
Data Warehouse (DWH)
Informatica
ETL
Amazon Redshift
PostgreSQL
Business Intelligence (BI)
Amazon Web Services (AWS)

Experience: 8+ Years

Work Location: Hyderabad

Mode of work: Work from Office


Senior Data Engineer / Architect

 

Summary of the Role

 

The Senior Data Engineer / Architect will play a key role within the data and technology team, engineering and building data solutions that enable seamless use of data across the organization.

 

Core Activities

- Work closely with the business teams and business analysts to understand and document data usage requirements
- Develop designs for data engineering solutions, including data pipelines, ETL, data warehouse, data mart and data lake solutions
- Develop data designs for reporting and other data use requirements
- Develop data governance solutions that provide data governance services, including data security, data quality and data lineage
- Lead implementation of data use and data quality solutions
- Provide operational support to users of the implemented data solutions
- Support development of solutions that automate reporting and business intelligence requirements
- Support development of machine learning and AI solutions using large-scale internal and external datasets

 

Other activities

- Work on and manage technology projects as and when required
- Provide user and technical training on data solutions

 

Skills and Experience

- At least 5-8 years of experience in a senior data engineer / architect role
- Strong experience with AWS-based data solutions, including AWS Redshift, analytics and data governance solutions
- Strong experience with industry-standard data governance / data quality solutions
- Strong experience managing a PostgreSQL data environment
- A background as a software developer working in AWS / Python will be beneficial
- Experience with BI tools like Power BI and Tableau
- Strong written and oral communication skills
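For illustration only, here is a minimal Python sketch of the kind of warehouse load step such AWS Redshift / PostgreSQL pipelines typically contain. It is not part of the role description: the cluster DSN, S3 path, IAM role and table names are invented placeholders.

```python
# Hypothetical sketch: bulk-loading a day's S3 extract into Redshift
# and sanity-checking the row count. The DSN, bucket, IAM role and
# table names are invented placeholders.
import psycopg2

REDSHIFT_DSN = "host=example-cluster.redshift.amazonaws.com port=5439 dbname=dwh user=etl password=secret"
S3_PATH = "s3://example-bucket/extracts/orders/2024-01-01/"
IAM_ROLE = "arn:aws:iam::123456789012:role/example-redshift-copy"

def load_orders():
    with psycopg2.connect(REDSHIFT_DSN) as conn:
        with conn.cursor() as cur:
            # Redshift's COPY bulk-loads from S3 in parallel; far faster
            # than row-by-row INSERTs for warehouse-scale loads.
            cur.execute(
                f"COPY staging.orders FROM '{S3_PATH}' "
                f"IAM_ROLE '{IAM_ROLE}' FORMAT AS PARQUET;"
            )
            cur.execute("SELECT COUNT(*) FROM staging.orders;")
            print("rows loaded:", cur.fetchone()[0])

if __name__ == "__main__":
    load_orders()
```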

 


About Softobiz Technologies Private limited

Founded: 2007
Size: 100-1000
Stage: Profitable

Softobiz is a leading Service & Product development company that has been developing custom-made mobile and web solutions for businesses for the last 15 years.

Tech Stack
Dot Net
React.js
React Native
Microsoft Dynamics
NodeJS (Node.js)
Java
Test Automation (QA)
Mobile App Development
Amazon Web Services (AWS)
Windows Azure
Integration
Selenium
Microsoft Visual Studio
ASP.NET
PHP
Candid answers by the company
What does the company do?
What is the location preference of jobs?

Softobiz is a software development company that provides custom software development, application maintenance and support, mobile app development, web development, cloud computing services, DevOps consulting, agile development, and quality assurance services. The company was founded in 2007 and is headquartered in Mohali, Punjab, with an office in Hyderabad, India. Softobiz has a team of over 350 experienced software engineers who are experts in a variety of technologies, including Java, .NET, PHP, and mobile development.


Similar jobs

ZeMoSo Technologies
Posted by HR Team
Remote only
3 - 6 yrs
Best in industry
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+5 more

Job Description: 

Machine Learning / AI Engineer (with 3+ years of experience)


We are seeking a highly skilled and passionate Machine Learning / AI Engineer to join our newly established data science practice area. In this role, you will primarily focus on working with Large Language Models (LLMs) and contribute to building generative AI applications. This position offers an exciting opportunity to shape the future of AI technology while charting an interesting career path within our organization.


Responsibilities:


1. Develop and implement machine learning models: Utilize your expertise in machine learning and artificial intelligence to design, develop, and deploy cutting-edge models, with a particular emphasis on Large Language Models (LLMs). Apply your knowledge to solve complex problems and optimize performance.


2. Building generative AI applications: Collaborate with cross-functional teams to conceptualize, design, and build innovative generative AI applications. Work on projects that push the boundaries of AI technology and deliver impactful solutions to real-world problems.


3. Data preprocessing and analysis: Collect, clean, and preprocess large volumes of data for training and evaluation purposes. Conduct exploratory data analysis to gain insights and identify patterns that can enhance the performance of AI models.


4. Model training and evaluation: Develop robust training pipelines for machine learning models, incorporating best practices in model selection, feature engineering, and hyperparameter tuning. Evaluate model performance using appropriate metrics and iterate on the models to improve accuracy and efficiency.
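As a hedged illustration of the training-and-evaluation loop this item describes, the sketch below builds a scikit-learn pipeline with feature scaling, cross-validated hyperparameter search and held-out evaluation. The data is synthetic and the parameter grid is an assumption, not a prescribed setup.

```python
# Assumed illustration of a training/evaluation pipeline with
# hyperparameter tuning; the data is synthetic and the grid is arbitrary.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),               # simple feature engineering step
    ("clf", LogisticRegression(max_iter=1000)),
])

# Cross-validated grid search stands in for hyperparameter tuning.
search = GridSearchCV(pipeline, {"clf__C": [0.1, 1.0, 10.0]}, cv=5, scoring="f1")
search.fit(X_train, y_train)

# Evaluate on held-out data with appropriate metrics.
print("best params:", search.best_params_)
print(classification_report(y_test, search.predict(X_test)))
```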


5. Research and stay up to date: Keep abreast of the latest advancements in machine learning, natural language processing, and generative AI. Stay informed about industry trends, emerging techniques, and open-source libraries, and apply relevant findings to enhance the team's capabilities.


6. Collaborate and communicate effectively: Work closely with a multidisciplinary team of data scientists, software engineers, and domain experts to drive AI initiatives. Clearly communicate complex technical concepts and findings to both technical and non-technical stakeholders.


7. Experimentation and prototyping: Explore novel ideas, experiment with new algorithms, and prototype innovative solutions. Foster a culture of innovation and contribute to the continuous improvement of AI methodologies and practices within the organization.


Requirements:


1. Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Relevant certifications in machine learning, deep learning, or AI are a plus.


2. Experience: A minimum of 3+ years of professional experience as a Machine Learning / AI Engineer, with a proven track record of developing and deploying machine learning models in real-world applications.


3. Strong programming skills: Proficiency in Python and experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn, pandas). Experience with cloud platforms (e.g., AWS, Azure, GCP) for model deployment is preferred.


4. Deep-learning expertise: Strong understanding of deep learning architectures (e.g., convolutional neural networks, recurrent neural networks, transformers) and familiarity with Large Language Models (LLMs) such as GPT-3, GPT-4, or equivalent.


5. Natural Language Processing (NLP) knowledge: Familiarity with NLP techniques, including tokenization, word embeddings, named entity recognition, sentiment analysis, text classification, and language generation.
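A small, assumed illustration of the NLP basics listed here (tokenization, vectorization and sentiment-style text classification) on toy data; the texts and labels are invented.

```python
# Toy example of tokenization, vectorization and sentiment-style text
# classification; the texts and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "great product, loved it",
    "terrible support, very slow",
    "works exactly as expected",
    "awful experience, would not recommend",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# TfidfVectorizer tokenizes the text and embeds it as weighted term counts.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["slow and awful", "loved the experience"]))
```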


6. Data manipulation and preprocessing skills: Proficiency in data manipulation using SQL and experience with data preprocessing techniques (e.g., cleaning, normalization, feature engineering). Familiarity with big data tools (e.g., Spark) is a plus.


7. Problem-solving and analytical thinking: Strong analytical and problem-solving abilities, with a keen eye for detail. Demonstrated experience in translating complex business requirements into practical machine learning solutions.


8. Communication and collaboration: Excellent verbal and written communication skills, with the ability to explain complex technical concepts to diverse stakeholders.


Carsome
Posted by Piyush Palkar
Remote, Kuala Lumpur
2 - 5 yrs
₹20L - ₹30L / yr
Python
Amazon Web Services (AWS)
Django
Flask
TensorFlow
+2 more
Carsome is a growing startup that is utilising data to improve the experience of second-hand car shoppers. This involves developing, deploying and maintaining the machine learning models that improve our customers' experience. We are looking for candidates who understand the machine learning project lifecycle and can help manage ML deployments.

Responsibilities:

- Write and maintain production-level code in Python for deploying machine learning models
- Create and maintain deployment pipelines through CI/CD tools (preferably GitLab CI)
- Implement alerts and monitoring for prediction accuracy and data drift detection
- Implement automated pipelines for training and replacing models
- Work closely with the data science team to deploy new models to production (a minimal serving sketch appears below)

Required Qualifications:

- Degree in Computer Science, Data Science, IT or a related discipline
- 2+ years of experience in software engineering or data engineering
- Programming experience in Python
- Experience in data profiling, ETL development, testing and implementation
- Experience in deploying machine learning models

Good to have:

- Experience with AWS resources for ML and data engineering (SageMaker, Glue, Athena, Redshift, S3)
- Experience in deploying TensorFlow models
- Experience in deploying and managing MLflow
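To give a concrete (but hypothetical) flavour of the deployment work above, the sketch below serves a pickled model behind a Flask endpoint; the artifact name price_model.pkl and the feature fields are assumptions, not Carsome's actual setup.

```python
# Minimal Flask serving sketch for a pickled regression model.
# "price_model.pkl" and the feature fields are hypothetical.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("price_model.pkl", "rb") as f:  # assumed training artifact
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    # Assumed feature layout: [mileage_km, age_years]
    features = [[payload["mileage_km"], payload["age_years"]]]
    return jsonify({"predicted_price": float(model.predict(features)[0])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```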
Curl
Agency job via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹25L / yr
Data Visualization
PowerBI
ETL
Business Intelligence (BI)
Data Analytics
+6 more
Main Responsibilities:

- Work closely with different Front Office and Support Function stakeholders, including but not restricted to Business Management, Accounts, Regulatory Reporting, Operations, Risk, Compliance and HR, on all data collection and reporting use cases
- Collaborate with Business and Technology teams to understand enterprise data, and create an innovative narrative to explain, engage and enlighten regular staff members as well as executive leadership with data-driven storytelling
- Solve data consumption and visualization through a data-as-a-service distribution model
- Articulate findings clearly and concisely for different target use cases, including through presentations, design solutions and visualizations
- Perform ad hoc / automated report generation tasks using Power BI, Oracle BI and Informatica
- Perform data access/transfer and ETL automation tasks using Python, SQL, OLAP / OLTP, RESTful APIs, and IT tools (CFT, MQ-Series, Control-M, etc.)
- Provide support and maintain the availability of BI applications irrespective of the hosting location
- Resolve issues escalated from Business and Functional areas on data quality, accuracy and availability, and provide incident-related communications promptly
- Work to strict deadlines on high-priority regulatory reports
- Serve as a liaison between business and technology to ensure that data-related business requirements for protecting sensitive data are clearly defined, communicated, well understood, and considered as part of operational prioritization and planning
- Work for the APAC Chief Data Office and coordinate with a fully decentralized team across different locations in APAC and the global HQ (Paris)

General Skills:

- Excellent knowledge of RDBMS and hands-on experience with complex SQL is a must; some experience with NoSQL and Big Data technologies like Hive and Spark would be a plus
- Experience with industrialized reporting on BI tools like PowerBI and Informatica
- Knowledge of data-related industry best practices in the highly regulated CIB industry; experience with regulatory report generation for financial institutions
- Knowledge of industry-leading data access, data security, Master Data and Reference Data Management, and establishing data lineage
- 5+ years of experience in Data Visualization / Business Intelligence / ETL developer roles
- Ability to multi-task and manage various projects simultaneously
- Attention to detail
- Ability to present to Senior Management and ExCo; excellent written and verbal communication skills
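As one assumed example of the Python + RESTful API + SQL automation tasks mentioned above, this sketch pulls records from an invented reporting endpoint and upserts them into a local relational table; the URL and schema are placeholders only.

```python
# Assumed sketch: pull records from a reporting API and upsert them
# into a relational table. The endpoint and schema are placeholders.
import sqlite3

import requests

API_URL = "https://api.example.com/v1/trades"  # invented endpoint

def extract():
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()  # assumed shape: list of {"id": ..., "amount": ...}

def load(rows):
    conn = sqlite3.connect("reporting.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS trades (id TEXT PRIMARY KEY, amount REAL)"
    )
    # INSERT OR REPLACE makes the load idempotent across reruns.
    conn.executemany(
        "INSERT OR REPLACE INTO trades (id, amount) VALUES (:id, :amount)", rows
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load(extract())
```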
Career Forge
Posted by Mohammad Faiz
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹12L - ₹15L / yr
Python
Apache Spark
PySpark
Data engineering
ETL
+10 more

🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐


Hello,


We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!


Position: Data Engineer  

Location: Gurugram (Gurgaon)  

Experience: 5+ years 


Key Skills:

- Python

- Spark, Pyspark

- Data Governance

- Cloud (AWS/Azure/GCP)


Main Responsibilities:

- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.

- Implement ETL processes for telemetry-based and stationary test data.

- Support in defining data governance, including data lifecycle management.

- Develop large-scale data processing engines and real-time search and analytics based on time series data.

- Ensure technical, methodological, and quality aspects.

- Support CI/CD processes.

- Foster know-how development and transfer, continuous improvement of leading technologies within Data Engineering.

- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.
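For a concrete (though assumed) flavour of the telemetry ETL work above, here is a short PySpark sketch that cleans raw device telemetry and aggregates it into five-minute windows; the paths, column names and window size are illustrative, not project specifics.

```python
# Assumed PySpark sketch: clean raw device telemetry and aggregate it
# into five-minute windows. Paths, columns and window size are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("telemetry-etl").getOrCreate()

raw = spark.read.json("s3a://example-bucket/telemetry/")  # placeholder path

agg = (
    raw.dropna(subset=["device_id", "ts", "signal_value"])  # drop malformed rows
       .withColumn("ts", F.to_timestamp("ts"))
       .groupBy("device_id", F.window("ts", "5 minutes"))   # time-series bucketing
       .agg(
           F.avg("signal_value").alias("avg_signal"),
           F.max("signal_value").alias("peak_signal"),
       )
)

agg.write.mode("overwrite").parquet("s3a://example-bucket/curated/telemetry_5min/")
```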


Qualification Requirements:

- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.

- Proficiency in Python and the PyData stack (Pandas/Numpy).

- Experience in high-level programming languages (C#/C++/Java).

- Familiarity with scalable processing environments like Dask (or Spark).

- Proficient in Linux and scripting languages (Bash Scripts).

- Experience in containerization and orchestration of containerized services (Kubernetes).

- Education in database technologies (SQL/OLAP and Non-SQL).

- Interest in Big Data storage technologies (Elastic, ClickHouse).

- Familiarity with Cloud technologies (Azure, AWS, GCP).

- Fluent English communication skills (speaking and writing).

- Ability to work constructively with a global team.

- Willingness to travel for business trips during development projects.


Preferable:

- Working knowledge of vehicle architectures, communication, and components.

- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).

- Experience in time-series processing.


How to Apply:

Interested candidates, please share your updated CV/resume with me.


Thank you for considering this exciting opportunity.

SmartHub Innovation Pvt Ltd
Posted by Sathya Venkatesh
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹20L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more

JD Code: SHI-LDE-01 

Version#: 1.0 

Date of JD Creation: 27-March-2023 

Position Title: Lead Data Engineer 

Reporting to: Technical Director 

Location: Bangalore Urban, India (on-site) 

 

SmartHub.ai (www.smarthub.ai) is a fast-growing startup headquartered in Palo Alto, CA, with offices in Seattle and Bangalore. We operate at the intersection of AI, IoT & Edge Computing. With strategic investments from leaders in infrastructure & data management, SmartHub.ai is redefining the Edge IoT space. Our “Software Defined Edge” products help enterprises rapidly accelerate their Edge Infrastructure Management & Intelligence. We empower enterprises to leverage their Edge environment to increase revenue, improve operational efficiency, and manage safety and digital risks using Edge and AI technologies.

 

SmartHub is an equal opportunity employer and will always be committed to nurturing a workplace culture that supports, inspires and respects all individuals, and that encourages employees to bring their best selves to work, laugh and share. We seek builders from a variety of backgrounds, perspectives and skills to join our team.

Summary 

This role requires the candidate to translate business and product requirements to build, maintain and optimize data systems, which can be relational or non-relational in nature. The candidate is expected to tune and analyse the data for uses including short- and long-term trend analysis, reporting, and AI/ML use cases.

We are looking for a talented technical professional with at least 8 years of proven experience in owning, architecting, designing, operating and optimising databases used for large-scale analytics and reporting.

Responsibilities 

• Provide technical & architectural leadership for the next generation of product development.
  • Innovate, research & evaluate new technologies and tools for a quality output.
  • Architect, design and implement solutions ensuring scalability, performance and security.
  • Code and implement new algorithms to solve complex problems.
  • Analyze complex data; develop, optimize and transform large data sets, both structured and unstructured.
  • Deploy and administer databases and continuously tune them for performance, especially on container orchestration stacks such as Kubernetes.
  • Develop analytical models and solutions.
  • Mentor junior members technically in architecture, design and robust coding.
  • Work in an Agile development environment while continuously evaluating and improving engineering processes.

Required 

• At least 8 years of experience with significant depth in designing and building scalable distributed database systems for enterprise-class products; experience of working in product development companies.
  • Should have been the feature/component lead for several complex features involving large datasets.
  • Strong background in relational and non-relational databases like Postgres, MongoDB and Hadoop.
  • Deep expertise in database optimization and tuning; SQL, Time Series Databases, Apache Drill, HDFS and Spark are good to have.
  • Excellent analytical and problem-solving skill sets.
  • Experience in tuning for high throughput is highly desirable.
  • Exposure to database provisioning in Kubernetes/non-Kubernetes environments, and configuration and tuning in a highly available mode.
  • Demonstrated ability to provide technical leadership and mentoring to the team.
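As one small, assumed example of the database optimization and tuning work described above, the sketch below captures an execution plan for a slow Postgres query with EXPLAIN ANALYZE; the DSN, table and query are invented.

```python
# Assumed tuning aid: print the execution plan of a slow query so that
# missing indexes show up as sequential scans. DSN and query are invented.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=dba")  # placeholder DSN
cur = conn.cursor()

cur.execute(
    "EXPLAIN (ANALYZE, BUFFERS) "
    "SELECT device_id, COUNT(*) FROM events "
    "WHERE ts >= now() - interval '1 day' GROUP BY device_id;"
)
for (line,) in cur.fetchall():
    print(line)  # e.g. a Seq Scan here suggests an index on events(ts)

cur.close()
conn.close()
```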


Amagi Media Labs
Posted by Rajesh C
Chennai
15 - 18 yrs
Best in industry
Data architecture
Architecture
Data Architect
Architect
Java
+5 more
Job Title: Data Architect
Job Location: Chennai
Job Summary

The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a Data Architecture strategy across various Data Lake platforms. You will help develop reference architecture and roadmaps to build highly available, scalable and distributed data platforms using cloud-based solutions to process high-volume, high-velocity and wide varieties of structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast’s industry-leading journalism.
Primary Responsibilities

The Data Architect is responsible for:

• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design and development delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM).
Required Skills

• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, Datastage etc.) / ELT and data integration technologies.
• Experience in any one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions; Data Lake knowledge is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of any one workflow orchestration tool.
• Understanding of Agile framework and delivery.
Preferred Skills:

● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams and context models
About Condé Nast

CONDÉ NAST INDIA (DATA)

Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - in other words, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aimed to create a company culture where data was the common language and facilitate an environment where insights shared in real-time could improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups (Data Intelligence, Data Engineering, Data Science, and Operations, including Product and Marketing Ops and Client Services) along with Data Strategy and monetization. The teams built capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:

We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
British Telecom
Agency job via posterity consulting by Kapil Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹14L / yr
Data engineering
Big Data
Google Cloud Platform (GCP)
ETL
Data warehousing
+6 more
You'll have the following skills & experience:

• Problem Solving: Resolving production issues to fix P1-P4 service issues, problems relating to introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: Understands and is experienced with the use of a wide range of programming concepts, and is also aware of and has applied a range of algorithms.
• Commercial & Risk Awareness: Able to understand & evaluate both obvious and subtle commercial risks, especially in relation to a programme.

Experience you would be expected to have:

• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: experience preferred, but learning essential
• Big Data: experience with Big Data methodology and technologies
• Programming: Python or Java, having worked with data (ETL)
• DevOps: understand how to work in a DevOps and agile way / versioning / automation / defect management – mandatory
• Agile methodology: knowledge of Jira
Hyderabad
5 - 8 yrs
₹12L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Big Data Engineer: 5+ yrs.
Immediate Joiner

 

• Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight (see the sketch below)
  • Experience in developing lambda functions with AWS Lambda
  • Expertise with Spark/PySpark – the candidate should be hands-on with PySpark code and able to do transformations with Spark
  • Should be able to code in Python and Scala
  • Snowflake experience will be a plus
  • Understanding of Hadoop and Hive is good to have rather than a hard requirement
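The sketch below illustrates, under assumed names, a Lambda handler that starts an Athena query over Glue-catalogued data; a QuickSight dashboard could then consume the results. The database, table and results bucket are placeholders, not the actual pipeline.

```python
# Assumed Lambda handler: start an Athena query over Glue-catalogued
# data; QuickSight could read the resulting dataset. Names are placeholders.
import boto3

athena = boto3.client("athena")

def handler(event, context):
    resp = athena.start_query_execution(
        QueryString="SELECT dt, COUNT(*) FROM orders GROUP BY dt",
        QueryExecutionContext={"Database": "sales"},  # Glue database
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    return {"query_execution_id": resp["QueryExecutionId"]}
```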
DFCS Technologies
Agency job via dfcs Technologies by SheikDawood Ali
Remote, Chennai, Anywhere India
1 - 5 yrs
₹9L - ₹14L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more
  • Create and maintain optimal data pipeline architecture,
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
• We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
    • Experience with big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
    • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
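As an assumed illustration of the workflow-management tools named above (Airflow in this case), here is a minimal DAG chaining stub extract, transform and load tasks; the DAG id, schedule and task bodies are placeholders.

```python
# Assumed Airflow sketch: a minimal DAG chaining stub extract ->
# transform -> load tasks. DAG id, schedule and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")

def transform():
    print("clean and reshape")

def load():
    print("write to warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```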
Pluto Seven Business Solutions Pvt Ltd
Posted by Sindhu Narayan
Bengaluru (Bangalore)
2 - 3 yrs
₹1L - ₹3L / yr
PowerBI
Microsoft SQL Server
Architecture
Business Intelligence (BI)
SQL Server Integration Services (SSIS)
Power BI Developer

Experience: 2-3 years
Skills: Power BI
Start date: Looking for immediate joiners
Contract position for 3 months
Location: Bangalore

Responsibilities:

1. Develop Power BI reports & effective dashboards after gathering and translating end-user requirements.
2. Turn large amounts of raw data into actionable information.
3. Be responsible for the performance and usability of the system.
4. Perform optimization of MS SQL queries and SSIS packages.
5. Provide solutions using SSAS cubes (Tabular/Multidimensional models).
6. Must have excellent experience in MS SQL Server – writing queries, joins, stored procedures, functions, etc.

The candidate should have knowledge of:

- Power BI with DAX queries, multiple data sources and relationships of data
- Power BI architecture
- Data modelling, calculations, conversions, and scheduling data refreshes in Power BI Desktop
- Hands-on scripting, data source integration and advanced GUI development in Power BI
- Power BI custom add-on development & deployment
- Connecting Power BI with on-premise and cloud computing platforms
- Providing optimized SQL scripts for complex scenarios
- Business Intelligence systems & tools (Microsoft Power BI, SQL Server Reporting Services)
- Good understanding of the processes of data quality, data cleansing and data transformation