STM32F100 Queues & interrupts

filipzh wrote on Tuesday, March 05, 2013:

Hi
I have a hard fault problem when sending data into a queue from an interrupt (I2C). If the array is filled with only a few bytes (<12) it doesn't crash, but when I send the array with e.g. 20 bytes I get the hard fault. I have read the M3 book and searched the web but can't figure out the problem. Can anyone please look at my setup to see if I have misunderstood the priority handling? I'm also wondering what a suitable array size would be.

freertosConfig.h

#define configLIBRARY_LOWEST_INTERRUPT_PRIORITY			15
#define configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY	5
/* The lowest priority. */
#define configKERNEL_INTERRUPT_PRIORITY 	( configLIBRARY_LOWEST_INTERRUPT_PRIORITY << (8 - configPRIO_BITS) )
/* Priority 5, shifted into the four implemented bits, i.e. 0x50. */
/* !!!! configMAX_SYSCALL_INTERRUPT_PRIORITY must not be set to zero !!!!
See http://www.FreeRTOS.org/RTOS-Cortex-M3-M4.html. */
#define configMAX_SYSCALL_INTERRUPT_PRIORITY 	( configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY << (8 - configPRIO_BITS) )
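
For reference, a worked expansion of the two values above, assuming configPRIO_BITS is 4 (the STM32F100 implements four priority bits):

    /* Sketch only, assuming configPRIO_BITS == 4:
       configKERNEL_INTERRUPT_PRIORITY      = 15 << (8 - 4) = 0xF0 (240), the lowest urgency.
       configMAX_SYSCALL_INTERRUPT_PRIORITY =  5 << (8 - 4) = 0x50 (80).
       Any ISR that calls a ...FromISR() API must be given an NVIC priority value that is
       numerically 5 or higher (5..15), i.e. logically at or below priority 5. */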

interrupt setup

// Interrupt config; important that I2C has the highest priority, see the errata.
    NVIC_InitStructure.NVIC_IRQChannel = I2C1_EV_IRQn;
    NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 11; 	// Low value => high priority
    NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0;			// Low value => high priority
    NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
    NVIC_Init(&NVIC_InitStructure);
    NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 12; 	// Low value => high priority
    NVIC_InitStructure.NVIC_IRQChannel = I2C1_ER_IRQn;
    NVIC_Init(&NVIC_InitStructure);
    
    NVIC_SetPriority(I2C1_ER_IRQn,10);
    NVIC_SetPriority(I2C1_EV_IRQn,11);
    // term_I2C enable
    I2C_Cmd(term_I2C, ENABLE);

I2C ISR

portBASE_TYPE xHigherPriorityTaskWoken = pdFALSE;
if (I2C_GetITStatus(term_I2C, I2C_IT_RXNE)) {
    vBMT_ReciveByte(&xHigherPriorityTaskWoken);
    I2C_ClearITPendingBit(term_I2C, I2C_IT_RXNE);
}
portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}
vBMT_ReciveByte()
save data to RxBuf until a complete package, then:
    if(xQueueSendToBackFromISR(xNMTRxQueue, &RxBuf, xHigherPriorityTaskWoken)==errQUEUE_FULL){
        //TODO: failed to write to the queue, do a reset!?
    }
then it returns to the handler

friedl wrote on Tuesday, March 05, 2013:

How did you set up the queue?

Why are you calling portEND_SWITCHING_ISR(xHigherPriorityTaskWoken) BEFORE xQueueSendToBackFromISR()?

richard_damon wrote on Tuesday, March 05, 2013:

To expand on Friedl's comment, portEND_SWITCHING_ISR() is supposed to be the VERY LAST step of your ISR. On many (most? all?) ports, whatever follows it might not get run until the task that was interrupted gets run again.
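
A minimal sketch of that ordering, reusing the identifiers from the posts above (the handler name I2C1_EV_IRQHandler is an assumption about how the vector is named):

    void I2C1_EV_IRQHandler( void )
    {
        portBASE_TYPE xHigherPriorityTaskWoken = pdFALSE;

        if( I2C_GetITStatus( term_I2C, I2C_IT_RXNE ) )
        {
            /* Buffers the byte and, once a complete packet has arrived,
               sends it with xQueueSendToBackFromISR(). */
            vBMT_ReciveByte( &xHigherPriorityTaskWoken );
            I2C_ClearITPendingBit( term_I2C, I2C_IT_RXNE );
        }

        /* Nothing after this line - it should be the final statement. */
        portEND_SWITCHING_ISR( xHigherPriorityTaskWoken );
    }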

filipzh wrote on Tuesday, March 05, 2013:

DMS_DATA is a 122-byte struct.

xNMTRxQueue = xQueueCreate( 2, sizeof( DMS_DATA ) );
vQueueAddToRegistry( xNMTRxQueue, (signed char*)"NMT Rx Queue" );
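
A queue of 2 items of 122 bytes each needs roughly 250 bytes of FreeRTOS heap, so it is worth checking the return value in case configTOTAL_HEAP_SIZE is too small. A sketch, not the code from the post:

    xNMTRxQueue = xQueueCreate( 2, sizeof( DMS_DATA ) );
    if( xNMTRxQueue == NULL )
    {
        /* Not enough FreeRTOS heap to create the queue. */
        for( ;; );
    }
    vQueueAddToRegistry( xNMTRxQueue, ( signed char * ) "NMT Rx Queue" );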

My code example above might be a bit unclear. I don't call portEND_SWITCHING_ISR() first; there is a function call (vBMT_ReciveByte(…)) before it, which is what sends to the queue.

filipzh wrote on Tuesday, March 05, 2013:

if (I2C_GetITStatus(term_I2C, I2C_IT_RXNE)) {
    vBMT_ReciveByte(&xHigherPriorityTaskWoken);
    I2C_ClearITPendingBit(term_I2C, I2C_IT_RXNE);
}
portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}

vBMT_ReciveByte(......){
save data to RxBuf until a complete package, then:
    if(xQueueSendToBackFromISR(xNMTRxQueue, &RxBuf, xHigherPriorityTaskWoken)==errQUEUE_FULL){
        //TODO:
    }
} // then it returns to the handler and to the portEND_SWITCHING_ISR() call

I tried using a semaphore instead and got the same error, so it's something in the context switch that fails.

rtel wrote on Tuesday, March 05, 2013:

Are you calling:

NVIC_PriorityGroupConfig( NVIC_PriorityGroup_4 );

anywhere during your initialisation to ensure all 4 priority bits are set as preemption priority rather than sub-priority? In my experience this is necessary when using STM32 parts, whereas other manufacturers' parts default to that anyway.
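
For illustration, a minimal sketch of where that call would go (not the poster's exact code), before any NVIC_Init() or NVIC_SetPriority() calls:

    /* Once, early in main(), before configuring individual interrupt priorities. */
    NVIC_PriorityGroupConfig( NVIC_PriorityGroup_4 );   /* 4 preemption bits, 0 sub-priority bits */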

Regards.

filipzh wrote on Wednesday, March 06, 2013:

I added that line before initializing my NVIC struct; the problem is still there.
It seems that call has the same effect as:

NVIC_SetPriority(I2C1_ER_IRQn,10);

I used NVIC_GetPriority(I2C1_ER_IRQn) to check the priority and it's set correctly.

davedoors wrote on Wednesday, March 06, 2013:

The functions do different things. NVIC_SetPriority() sets the preemption and sub-priority values for one interrupt, but does not tell the Cortex NVIC how many bits to use for each. You have to tell it how many bits to use for each before the values passed into NVIC_SetPriority() make sense. For example, you can try to set the sub-priority to 2 in a call to NVIC_SetPriority(), but the sub-priority will still be 0 if NVIC_PriorityGroupConfig() has set the number of bits to use for the sub-priority to 0. As another example, if you use NVIC_PriorityGroupConfig() to set the number of sub-priority bits to 1, then using NVIC_SetPriority() to set a sub-priority value of 2 is invalid, because only 0 and 1 are valid when a single bit is used.
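
A small sketch of that two-step relationship, using the same SPL calls as in the thread (the sub-priority value of 2 is only there to show the truncation):

    /* Step 1: tell the NVIC how to split the four implemented priority bits. */
    NVIC_PriorityGroupConfig( NVIC_PriorityGroup_4 );   /* 4 preemption bits, 0 sub-priority bits */

    /* Step 2: per-interrupt values are interpreted using that split. With zero
       sub-priority bits, a requested sub-priority of 2 is silently truncated to 0. */
    NVIC_InitStructure.NVIC_IRQChannel = I2C1_EV_IRQn;
    NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 11;
    NVIC_InitStructure.NVIC_IRQChannelSubPriority = 2;   /* no effect - no sub-priority bits exist */
    NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
    NVIC_Init( &NVIC_InitStructure );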

filipzh wrote on Wednesday, March 06, 2013:

Ok this is my setup:

NVIC_PriorityGroupConfig( NVIC_PriorityGroup_4 );
    NVIC_InitStructure.NVIC_IRQChannel = I2C1_EV_IRQn;
    NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 11; 	// Low value => high priority
    NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0;			// Low value => high priority
    NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
    NVIC_Init(&NVIC_InitStructure);
    NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 10; 	// Low value => high priority
    NVIC_InitStructure.NVIC_IRQChannel = I2C1_ER_IRQn;
    NVIC_Init(&NVIC_InitStructure);
    NVIC_SetPriority(I2C1_EV_IRQn,11);
    NVIC_SetPriority(I2C1_ER_IRQn,10);
    // term_I2C enable
    I2C_Cmd(term_I2C, ENABLE);
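
(As an aside, with NVIC_PriorityGroup_4 selected, the two NVIC_SetPriority() calls write the same register values that NVIC_Init() has just written, for example:

    NVIC_SetPriority( I2C1_EV_IRQn, 11 );   /* writes 11 << 4 = 0xB0, same as NVIC_Init() above */

so they are redundant, though harmless.)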

I'm thinking it might have something to do with the ISR stack; where can I configure its size?
The error only appears when the received packet is long, so I must be overwriting some memory.
I used the memory browser to see if my buffer overwrote any memory, but it looks OK.
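
One way to rule out such an overrun is a bounds check in the receive path. A hypothetical sketch only: ucRxIndex and the use of I2C_ReceiveData() are assumptions about what vBMT_ReciveByte() does internally:

    static uint8_t ucRxIndex = 0;

    if( ucRxIndex < sizeof( RxBuf ) )
    {
        ( ( uint8_t * ) &RxBuf )[ ucRxIndex++ ] = I2C_ReceiveData( term_I2C );
    }
    else
    {
        /* Packet longer than RxBuf - discard the byte rather than writing
           past the end of the buffer and corrupting adjacent memory. */
    }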

davedoors wrote on Wednesday, March 06, 2013:

ISRs use the stack configured by your linker script or build project, the same stack that is used when main() is called.

filipzh wrote on Wednesday, March 06, 2013:

The issue is solved: I had written outside a buffer, which corrupted my stack. Thanks for the help and advice!
Regards