11+ Microsoft SSIS Jobs in Mumbai
Apply to 11+ Microsoft SSIS Jobs in Mumbai on CutShort.io. Explore the latest Microsoft SSIS Job opportunities across top companies like Google, Amazon & Adobe.
Should be able to use transformation components to transform the data
Should possess knowledge of incremental load, full load, etc.
Should design, build, and deploy effective packages
Should be able to schedule these packages through task schedulers
Implement stored procedures and effectively query a database
Translate requirements from business and analyst stakeholders into technical code
Identify and test for bugs and bottlenecks in the ETL solution
Ensure the best possible performance and quality in the packages
Provide support and fix issues in the packages
Writes advanced SQL, including some query tuning
Experience in identifying data quality issues
Some database design experience is helpful
Experience designing and building complete ETL/SSIS processes moving and transforming data for ODS, staging, and data warehousing
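The incremental-load pattern mentioned above boils down to upserting only new or changed rows, rather than truncating and reloading everything (a full load). A minimal sketch using SQLite as a stand-in for the warehouse target (the table and column names are hypothetical):

```python
import sqlite3

# In-memory database standing in for a warehouse target (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT)")

def incremental_load(rows):
    """Upsert only new/changed rows; a full load would truncate and reload everything."""
    conn.executemany(
        """INSERT INTO dim_customer (id, name, updated_at) VALUES (?, ?, ?)
           ON CONFLICT(id) DO UPDATE SET name = excluded.name, updated_at = excluded.updated_at""",
        rows,
    )
    conn.commit()

incremental_load([(1, "Acme", "2024-01-01"), (2, "Globex", "2024-01-01")])          # initial batch
incremental_load([(2, "Globex Corp", "2024-02-01"), (3, "Initech", "2024-02-01")])  # one changed, one new
print(conn.execute("SELECT COUNT(*) FROM dim_customer").fetchone()[0])  # → 3
```

In SSIS the same effect is typically achieved with a Lookup/Conditional Split pattern or a MERGE statement in an Execute SQL task.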
PL/SQL Developer
Experience: 4 to 6 years
Skills: MS SQL Server and Oracle; AWS or Azure
• Experience in setting up managed database services (such as AWS RDS or Azure SQL Database) in the cloud
• Strong proficiency with SQL and its variation among popular databases
• Should be well-versed in writing stored procedures, functions, and packages, and in using collections
• Skilled at optimizing large, complicated SQL statements.
• Should have worked in migration projects.
• Should have worked on creating reports.
• Should be able to distinguish between normalized and de-normalized data modelling designs and use cases.
• Knowledge of best practices when dealing with relational databases
• Capable of troubleshooting common database issues
• Familiar with tools that can aid with profiling server resource usage and optimizing it.
• Proficient understanding of code versioning tools such as Git and SVN
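The optimization and profiling points above usually start with reading the query plan to see whether a query scans the whole table or uses an index. A small sketch using SQLite's EXPLAIN QUERY PLAN (data and index names are hypothetical; the same idea applies to SQL Server or Oracle execution plans):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, float(i)) for i in range(1000)])

def plan(sql):
    # The fourth column of EXPLAIN QUERY PLAN output describes the access strategy.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM orders WHERE customer_id = 42"
before = plan(query)   # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # index search
print(before)
print(after)
```

The first plan reports a scan; after the index is created, the same query is answered with an index search, which is the kind of change query tuning aims for.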
Your key responsibilities
- Create and maintain optimal data pipeline architecture. Should have experience in building batch/real-time ETL Data Pipelines. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- The individual will be responsible for solution design, integration, data sourcing, transformation, database design and implementation of complex data warehousing solutions.
- Responsible for development, support, maintenance, and implementation of a complex project module
- Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
- Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
- Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support, and deliver complete reporting solutions.
- Preparation of the HLD covering the application architecture and high-level design.
- Preparation of the LLD covering job design, job descriptions, and detailed information about the jobs.
- Preparation of Unit Test cases and execution of the same.
- Provide technical guidance and mentoring to application development teams throughout all the phases of the software development life cycle
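A batch ETL step of the kind these responsibilities describe reduces to three stages: extract, transform, load. A toy sketch in Python (the records and the in-memory "warehouse" are hypothetical stand-ins for real connectors):

```python
# Extract raw records, transform (clean/standardize), then load into a target.

def extract():
    # Stand-in for reading from a source system.
    return [{"name": " alice ", "amount": "120.50"}, {"name": "BOB", "amount": "75"}]

def transform(rows):
    # Standardize names and cast amounts, as a cleansing stage would.
    return [{"name": r["name"].strip().title(), "amount": float(r["amount"])} for r in rows]

def load(rows, target):
    target.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)  # → [{'name': 'Alice', 'amount': 120.5}, {'name': 'Bob', 'amount': 75.0}]
```

In Azure Data Factory the same shape appears as source datasets, mapping data flows, and sink datasets, with the orchestration handled by pipelines rather than function calls.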
Skills and attributes for success
- Strong experience in SQL. Proficient in writing performant SQL working with large data volumes. Proficiency in writing and debugging complex SQLs.
- Strong experience with the Microsoft Azure database ecosystem. Experienced in Azure Data Factory.
- Strong in Data Warehousing concepts. Experience with large-scale data warehousing architecture and data modelling.
- Should have enough experience to work with PowerShell scripting
- Able to guide the team through the development, testing and implementation stages and review the completed work effectively
- Able to make quick decisions and solve technical problems to provide an efficient environment for project implementation
- Primary owner of delivery and timelines. Review code written by other engineers.
- Maintain highest levels of development practices including technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution, writing clean, modular and self-sustaining code, with repeatable quality and predictability
- Must have understanding of business intelligence development in the IT industry
- Outstanding written and verbal communication skills
- Should be adept in SDLC process - requirement analysis, time estimation, design, development, testing and maintenance
- Hands-on experience in installing, configuring, operating, and monitoring CI/CD pipeline tools
- Should be able to orchestrate and automate pipelines
- Good to have: knowledge of distributed systems such as Hadoop, Hive, Spark
To qualify for the role, you must have
- Bachelor's Degree in Computer Science, Economics, Engineering, IT, Mathematics, or related field preferred
- More than 6 years of experience in ETL development projects
- Proven experience in delivering effective technical ETL strategies
- Microsoft Azure project experience
- Technologies: ETL- ADF, SQL, Azure components (must-have), Python (nice to have)
Our client is an innovative Fintech company that is revolutionizing the business of short term finance. The company is an online lending startup that is driven by an app-enabled technology platform to solve the funding challenges of SMEs by offering quick-turnaround, paperless business loans without collateral. It counts over 2 million small businesses across 18 cities and towns as its customers.
- Performing extensive analysis on SQL, Google Analytics & Excel from a product standpoint to provide quick recommendations to the management
- Establishing scalable, efficient and automated processes to deploy data analytics on large data sets across platforms
What you need to have:
- B.Tech /B.E.; Any Graduation
- Strong background in statistical concepts & calculations to perform analysis/ modeling
- Proficient in SQL and other BI tools like Tableau, Power BI etc.
- Good knowledge of Google Analytics and any other web analytics platforms (preferred)
- Strong analytical and problem-solving skills to analyze large volumes of data
- Ability to work independently and bring innovative solutions to the team
- Experience of working with a start-up or a product organization (preferred)
- Key responsibility is to design and develop a data pipeline, including the architecture, prototyping, and development of data extraction, transformation/processing, cleansing/standardizing, and loading into the Data Warehouse in real time or near real time. Source data can be in structured, semi-structured, and/or unstructured formats.
- Provide technical expertise to design efficient data ingestion solutions that consolidate data from RDBMSs, APIs, messaging queues, weblogs, images, audio, documents, etc. of enterprise applications, SaaS applications, and external third-party sites or APIs, through ETL/ELT, API integrations, Change Data Capture, Robotic Process Automation, custom Python/Java coding, etc.
- Development of complex data transformation using Talend (BigData edition), Python/Java transformation in Talend, SQL/Python/Java UDXs, AWS S3, etc to load in OLAP Data Warehouse in Structured/Semi-structured form
- Development of data model and creating transformation logic to populate models for faster data consumption with simple SQL.
- Implementing automated Audit & Quality assurance checks in Data Pipeline
- Document & maintain data lineage to enable data governance
- Coordination with BIU, IT, and other stakeholders to provide best-in-class data pipeline solutions, exposing data via APIs, loading into downstream systems, NoSQL databases, etc.
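The automated audit and quality-assurance checks mentioned above typically include row-count reconciliation between source and target and key-integrity checks. A minimal sketch (the check names and data are hypothetical):

```python
def audit(source_rows, target_rows, key="id"):
    """Return a list of failed checks comparing a load's source and target."""
    failures = []
    if len(source_rows) != len(target_rows):
        failures.append("row count mismatch")
    keys = [r.get(key) for r in target_rows]
    if any(k is None for k in keys):
        failures.append("null key")
    if len(set(keys)) != len(keys):
        failures.append("duplicate key")
    return failures

src = [{"id": 1}, {"id": 2}]
print(audit(src, [{"id": 1}, {"id": 2}]))  # → []
print(audit(src, [{"id": 1}, {"id": 1}]))  # → ['duplicate key']
```

In a real pipeline these checks would run as a post-load stage, with failures blocking promotion of the batch and feeding the data-lineage/governance records the role describes.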
Requirements
- Programming experience using Python / Java, to create functions / UDX
- Extensive technical experience with SQL on RDBMS (Oracle/MySQL/Postgresql etc) including code optimization techniques
- Strong ETL/ELT skillset using Talend BigData Edition. Experience in Talend CDC & MDM functionality will be an advantage.
- Experience & expertise in implementing complex data pipelines, including semi-structured & unstructured data processing
- Expertise in designing efficient data ingestion solutions that consolidate data from RDBMSs, APIs, messaging queues, weblogs, images, audio, documents, etc. of enterprise applications, SaaS applications, and external third-party sites or APIs, through ETL/ELT, API integrations, Change Data Capture, Robotic Process Automation, custom Python/Java coding, etc.
- Good understanding & working experience in OLAP Data Warehousing solutions (Redshift, Synapse, Snowflake, Teradata, Vertica, etc) and cloud-native Data Lake (S3, ADLS, BigQuery, etc) solutions
- Familiarity with AWS tool stack for Storage & Processing. Able to recommend the right tools/solutions available to address a technical problem
- Good knowledge of database performance and tuning, troubleshooting, query optimization, and tuning
- Good analytical skills with the ability to synthesize data to design and deliver meaningful information
- Good knowledge of Design, Development & Performance tuning of 3NF/Flat/Hybrid Data Model
- Know-how of any NoSQL DB (DynamoDB, MongoDB, CosmosDB, etc.) will be an advantage.
- Ability to understand business functionality, processes, and flows
- Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently
Functional knowledge
- Data Governance & Quality Assurance
- Distributed computing
- Linux
- Data structures and algorithms
- Unstructured Data Processing
Specialism: Advanced Analytics, Data Science, regression, forecasting, analytics, SQL, R, Python, decision trees, random forests, SAS, clustering, classification
Senior Analytics Consultant- Responsibilities
- Understand the business problem and requirements by building domain knowledge, and translate them into a data science problem
- Conceptualize and design a cutting-edge data science solution to the problem, applying design-thinking concepts
- Identify the right algorithms, tech stack, and sample outputs required to efficiently address the end need
- Prototype and experiment with the solution to successfully demonstrate its value
- Independently, or with support from the team, execute the conceptualized solution as per plan, following project management guidelines
- Present the results to internal and client stakeholders in an easy-to-understand manner with great storytelling, storyboarding, insights, and visualization
- Help build overall data science capability for eClerx through support in pilots, pre-sales pitches, product development, and practice development initiatives
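Regression is the most basic of the modeling techniques these analytics roles list. A dependency-free ordinary-least-squares fit on toy data (real engagements would of course fit on cleaned business data):

```python
from statistics import mean

# Toy data: a roughly linear relationship with slope ~2.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

# Closed-form simple linear regression: slope = cov(x, y) / var(x).
x_bar, y_bar = mean(xs), mean(ys)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar
print(round(slope, 2), round(intercept, 2))  # → 1.99 0.09
```

In practice the same fit would come from R's `lm` or Python's scikit-learn, but the closed form makes the underlying arithmetic explicit.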
About Us
upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, Entrepreneurship, etc. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and stay relevant and help build the careers of tomorrow.
- Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
- Hands-on experience in implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
- Expert in writing T-SQL programming for complex stored procedures, functions, views and query optimization.
- Should be aware of Database development for both on-premise and SAAS Applications using SQL Server and PostgreSQL.
- Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
- Experience and expertise in building machine learning models using Logistic and linear regression, Decision tree and Random forest Algorithms.
- PolyBase queries for exporting and importing data into Azure Data Lake.
- Building data models both tabular and multidimensional using SQL Server data tools.
- Writing data preparation, cleaning, and processing steps using Python, Scala, and R.
- Programming experience using python libraries NumPy, Pandas and Matplotlib.
- Implementing NoSQL databases and writing queries using Cypher.
- Designing end user visualizations using Power BI, QlikView and Tableau.
- Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
- Experience using the expression languages MDX and DAX.
- Experience in migrating on-premise SQL server database to Microsoft Azure.
- Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
- Performance tuning complex SQL queries, hands on experience using SQL Extended events.
- Data modeling using Power BI for Adhoc reporting.
- Raw data load automation using T-SQL and SSIS
- Expert in migrating existing on-premise database to SQL Azure.
- Experience in using U-SQL for Azure Data Lake Analytics.
- Hands on experience in generating SSRS reports using MDX.
- Experience in designing predictive models using Python and SQL Server.
- Developing machine learning models using Azure Databricks and SQL Server
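The data preparation and cleaning work listed above usually begins with missing-value handling before any model is fitted. A dependency-free mean-imputation sketch (toy data; `None` marks a missing value):

```python
from statistics import mean

def impute_mean(rows):
    """Replace None in each column with that column's mean over the non-missing values."""
    cols = list(zip(*rows))
    means = [mean([v for v in col if v is not None]) for col in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)] for row in rows]

data = [[1.0, 200.0], [2.0, None], [3.0, 400.0]]
print(impute_mean(data))  # → [[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]]
```

With NumPy/Pandas the equivalent is a one-liner (e.g. filling with a column mean), but the pure-Python version shows exactly what the imputation does.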
Ganit Inc. is the fastest growing Data Science & AI company in Chennai.
Founded in 2017 by 3 industry experts, alumni of IITs/SPJIMR, each with 17+ years of experience in the field of analytics.
We are in the business of maximising Decision Making Power (DMP) for companies by providing solutions at the intersection of hypothesis based analytics, discovery based AI and IoT. Our solutions are a combination of customised services and functional product suite.
We primarily operate as a US-based start-up, with clients across the US, Asia-Pacific, and the Middle East, and offices in the USA (New Jersey) and India (Chennai).
Started with 3 people, the company is growing fast and now has 100+ employees.
1. What do we expect from you
- Should possess a minimum of 2 years of experience in data analytics model development and deployment
- Skills relating to core Statistics & Mathematics.
- Huge interest in handling numbers
- Ability to understand all domains in businesses across various sectors
- Natural passion towards numbers, business, coding, visualisation
2. Necessary skill set:
- Proficient in R/Python, Advanced Excel, SQL
- Should have worked with Retail/FMCG/CPG projects solving analytical problems in Sales/Marketing/Supply Chain functions
- Very good understanding of algorithms, mathematical models, statistical techniques, data mining, like Regression models, Clustering/ Segmentation, time series forecasting, Decision trees/Random forest, etc.
- Ability to choose the right model for the right data and translate that into code in R, Python, VBA (Proven capabilities)
- Should have handled large datasets, with a thorough understanding of SQL
- Ability to handle a team of Data Analysts
3. Good to have skill set:
- Microsoft PowerBI / Tableau / Qlik View / Spotfire
4. Job Responsibilities:
- Translate business requirements into technical requirements
- Data extraction, preparation and transformation
- Identify, develop and implement statistical techniques and algorithms that address business challenges and adds value to the organisation
- Create and implement data models
- Interact with clients for queries and delivery adoption
5. Screening Methodology
- Problem Solving round (Telephonic Conversation)
- Technical discussion round (Telephonic Conversation)
- Final fitment discussion (Video Round)
What's the role?
Your role as a Principal Engineer will involve working with various teams. As a principal engineer, you will need full knowledge of the software development lifecycle and Agile methodologies. You will demonstrate multi-tasking skills under tight deadlines and constraints. You will regularly contribute to the development of work products (including analyzing, designing, programming, debugging, and documenting software) and may work with customers to resolve challenges and respond to suggestions for improvements and enhancements. You will set the standards and principles for the products you drive.
- Set up coding practices, guidelines, and quality standards for the software delivered.
- Determines operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.
- Documents and demonstrates solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code.
- Prepares and installs solutions by determining and designing system specifications, standards, and programming.
- Improves operations by conducting systems analysis; recommending changes in policies and procedures.
- Updates job knowledge by studying state-of-the-art development tools, programming techniques, and computing equipment; participating in educational opportunities; reading professional publications; maintaining personal networks; participating in professional organizations.
- Protects operations by keeping information confidential.
- Develops software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; following the software development lifecycle.
Who are you? You are a go-getter with an eye for detail, strong problem-solving and debugging skills, and a BE/MCA/ME/M.Tech or equivalent degree from a reputed college/university.
Essential Skills / Experience:
- 10+ years of engineering experience
- Experience in designing and developing high volume web-services using API protocols and data formats
- Proficient in API modelling languages and annotations
- Proficient in Java programming
- Experience with Scala programming
- Experience with ETL systems
- Experience with Agile methodologies
- Experience with Cloud service & storage
- Proficient in Unix/Linux operating systems
- Excellent oral and written communication skills
Preferred:
- Functional programming languages (Scala, etc)
- Scripting languages (bash, Perl, Python, etc)
- Amazon Web Services (Redshift, ECS etc)